
AWS ParallelCluster
AWS ParallelCluster is an open-source cluster management tool for running HPC clusters on AWS. It works with batch schedulers such as Slurm and AWS Batch.
AWS ParallelCluster services
Automated cluster provisioning
Define a YAML configuration file and let ParallelCluster build the compute, storage, and networking resources automatically.
Flexible scheduling
Use Slurm or AWS Batch to submit jobs, manage queues, and auto-scale compute resources with demand.
Support for HPC workloads
You’re not limited to small jobs; you can spin up large EC2 instances, GPU-enabled instances, or specialized hardware as needed.
Shared storage and data pipelines
Mount shared filesystems across all nodes (EFS, EBS, or a parallel filesystem such as FSx for Lustre), useful for large-scale simulations, data analysis, and ML training.
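As a minimal sketch, a ParallelCluster 3 configuration file ties these pieces together: a head node, an auto-scaling Slurm queue, and a shared EFS mount. The subnet ID, key name, and instance types below are placeholders, not recommendations:

```yaml
Region: us-east-1
Image:
  Os: alinux2
HeadNode:
  InstanceType: t3.medium
  Networking:
    SubnetId: subnet-0123456789abcdef0   # placeholder subnet
  Ssh:
    KeyName: my-key                      # placeholder key pair
Scheduling:
  Scheduler: slurm
  SlurmQueues:
    - Name: compute
      ComputeResources:
        - Name: c6i
          InstanceType: c6i.4xlarge
          MinCount: 0                    # scale to zero when idle
          MaxCount: 16                   # upper bound for auto-scaling
      Networking:
        SubnetIds:
          - subnet-0123456789abcdef0
SharedStorage:
  - MountDir: /shared                    # visible on every node
    Name: shared-efs
    StorageType: Efs
```

A cluster is then created from this file with `pcluster create-cluster --cluster-name demo --cluster-configuration cluster.yaml`.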
Our implementation process
1
Configuration
We define the necessary permissions, networks, and resources in AWS, leaving the environment ready for deployment.
2
Customization
We tailor the cluster to the scientific or engineering workloads it will run.
3
Integrations
We connect file systems, containers, and HPC frameworks.
4
Monitoring
We enable metrics, alerts, and autoscaling rules so the cluster stays optimized.
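In ParallelCluster 3, the monitoring step above maps onto the config file’s `Monitoring` section; a minimal sketch (the retention period is an example value):

```yaml
Monitoring:
  Logs:
    CloudWatch:
      Enabled: true          # ship cluster logs to CloudWatch Logs
      RetentionInDays: 14    # example retention period
  Dashboards:
    CloudWatch:
      Enabled: true          # auto-create a CloudWatch dashboard for the cluster
```

Alerts and autoscaling bounds themselves live elsewhere: CloudWatch alarms are defined outside the cluster config, and scaling limits come from each queue’s `MinCount`/`MaxCount`.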


