Microsoft's new HPC Pack 2012 Beta Program was recently announced on the Windows HPC Team Blog. The announcement promised more finely tuned job scheduling, increased node management control, MS-MPI ‘message compression’, increased Azure interoperability, and SOA job monitoring.

This blog entry includes Excelian's review of the functionality listed above.

More finely tuned job scheduling

  • Exit code customisation: You can specify task-level and/or job-level exit codes. Task-level exit codes override job-level codes. This greatly simplifies managing an application's flow between tasks.

  • Task-level failure dependency: You can configure a dependent task to run if a parent task fails (or succeeds). In HPC Pack 2008 you could only specify that a task should run when another finished; you could not specify whether that finished task was supposed to have succeeded. The added control means that you no longer have tasks running blindly when their parent tasks have failed.

  • Leading on from task-level dependency is job dependency. A job can now be dependent on another job. When running jobs that depend on the successful completion of a previous job (intraday runs, for example), being able to link jobs adds another layer of error checking that might not have been easy to handle on the application side.

  • Hold Until: Specifies the date and time until which the HPC Job Scheduler Service should wait before running the job. This used to be a setting only administrators could use, and only when a job was already queued. It can now be set by users, even before a job is queued. This adds a cron-like facility for user jobs; it would be useful, for example, when a user wants a job to run at a specific time each night while the grid is dormant.


  • Node group set scheduling: You can choose the node groups a job can run on based on set theory, using the operators union, intersect and uniform. This means that you no longer need to create a dedicated node group in advance for every combination you want to target.

  • Output caching: For each task, the most recent 4000 bytes of output are cached (previously, only the first 4000 bytes were cached).

  • Pre-emption: Pre-emption is now managed at the task level instead of the job level. Rescheduling at the task level increases the throughput of a grid that uses pre-emption: more jobs now run to completion than in older versions, where entire jobs had to be cancelled and rescheduled.

  • Runtime job and task properties can now be changed at any point during job execution. If minor mistakes in the job settings are discovered at runtime, the job no longer needs to be cancelled and resubmitted to resolve them.


  • Node reservation: You can configure the scheduler to run a job on a single node. This is useful for MPI jobs: there are instances where splitting a job across nodes would be detrimental to efficiency when data is passed between the ranks.
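The set-based node group scheduling described above maps directly onto ordinary set operations. The sketch below illustrates the idea in plain Python; the group names, node lists and helper function are invented for illustration and are not part of the HPC Pack API:

```python
# Hypothetical node groups (names and members invented for illustration).
node_groups = {
    "GpuNodes":   {"node01", "node02", "node03"},
    "BigMemory":  {"node02", "node03", "node04"},
    "LinuxBurst": {"node05", "node06"},
}

def candidate_nodes(group_names, operator):
    """Resolve the set of nodes a job may run on.

    'union'     - nodes in any of the listed groups
    'intersect' - nodes in every listed group
    'uniform'   - nodes from a single group only (the scheduler picks one
                  group; here we return each group's node set separately)
    """
    sets = [node_groups[g] for g in group_names]
    if operator == "union":
        return set().union(*sets)
    if operator == "intersect":
        return set.intersection(*sets)
    if operator == "uniform":
        return sets  # the scheduler chooses exactly one of these sets
    raise ValueError(f"unknown operator: {operator}")

# Nodes that are both GPU-equipped and large-memory:
print(sorted(candidate_nodes(["GpuNodes", "BigMemory"], "intersect")))
# ['node02', 'node03']
```

Before HPC Pack 2012, targeting such a combination meant pre-creating a node group containing exactly that intersection; the operators make the combination a scheduling-time decision instead.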
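The output caching change is easy to picture: keeping the most recent 4000 bytes rather than the first 4000 amounts to a sliding window over the task's output stream. A minimal sketch of that behaviour (the helper is illustrative, not the HPC Pack implementation):

```python
CACHE_SIZE = 4000  # HPC Pack caches 4000 bytes of task output

def cache_output(chunks, cache_size=CACHE_SIZE):
    """Keep only the most recent `cache_size` bytes of a task's output,
    as HPC Pack 2012 does (2008 kept the *first* bytes instead)."""
    cached = b""
    for chunk in chunks:
        cached = (cached + chunk)[-cache_size:]
    return cached

# A task that emits 10,000 bytes: only the tail survives in the cache.
tail = cache_output([b"x" * 6000, b"y" * 4000])
print(len(tail))  # 4000
```

For long-running tasks this is the more useful half of the stream to keep, since the most recent output usually explains why a task failed.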

Increased node management control
You can manage the power plan or power scheme on your HPC cluster nodes by running a diagnostic test on the nodes (the Active Power Scheme Report diagnostic test). It checks that the nodes are using the correct power plan; in most cases, this is High Performance. Power plans can be changed using a node template: during bare-metal deployment, when adding pre-configured nodes, or in the Maintenance section of the node template. This feature is not particularly useful, as most organisations would manage their power plans using Group Policy.


MS-MPI improvements
Collective operations in the Microsoft Message Passing Interface now take advantage of hierarchical processor topologies.

Microsoft MPI now uses message compression on messages sent over the sockets channel to improve performance. Although there is a slight overhead in the time needed to perform the compression, overall, applications will spend less time waiting for communication between processes. Performance gains are correlated with the hardware configuration.

There does not seem to be a way to configure when compression occurs or what level of compression is applied. This is where Microsoft HPC Pack falls behind its competitors: the IBM Symphony API lists data compression flags that let you modify the behaviour. ‘BEST_SPEED’ uses the fastest compression method, and ‘BETTER_SIZE’ compresses the data further, with a trade-off in speed.
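The speed-versus-size trade-off that Symphony exposes can be illustrated with zlib, whose numeric compression levels embody the same choice. This is a generic illustration of the trade-off, not the MS-MPI or Symphony implementation, and the payload is invented:

```python
import zlib

# A repetitive payload standing in for a rank-to-rank message.
payload = b"rank-to-rank message payload " * 2000  # ~58 KB

fast = zlib.compress(payload, level=1)   # analogous to 'BEST_SPEED'
small = zlib.compress(payload, level=9)  # analogous to 'BETTER_SIZE'

# Higher levels spend more CPU time to produce a smaller message;
# which is worthwhile depends on the network and CPU balance of the grid.
print(len(payload), len(fast), len(small))
```

An MPI-style workload with tightly coupled ranks might prefer the level-1 analogue, while a bandwidth-constrained sockets channel might justify the heavier level; the point of the Symphony flags is precisely that the user gets to make this call.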

Increased Azure interoperability

  • Nodes can be deployed in Windows Azure deployments in which Windows Azure Virtual Network is available.

  • Cluster administrators can now configure the number of proxy nodes that a Windows Azure deployment uses.

  • Cluster administrators now can specify an application virtual hard disk (VHD) that is automatically mounted when Azure worker role instances are provisioned.

  • You can now stage data on an HPC cluster for use on both on-premises and Windows Azure nodes, without concern about which of the two environments the nodes are running in.

SOA job monitoring

New in HPC 2012 is the ability to see detailed information about the progress of SOA jobs and sessions, and to view message-level traces for SOA sessions. In Microsoft HPC Pack 2008 R2, only service-task-level information was visible: the tasks within each session were listed, but there was no way of knowing how long each task took to finish. The new functionality provides much greater visibility of the SOA applications running on the grid.

With Event logging turned on, you can explore the messages that were passed to the grid during the execution of a job. This moves the onus of keeping track of individual message execution times from the development side to the grid side.

The ability to monitor SOA jobs in detail marks a dramatic leap forward in the functionality of Microsoft HPC. Where developers and grid administrators were previously blind to job execution details, we now have an ‘under the hood’ view of SOA jobs.

Microsoft is certainly heading down the right path when it comes to the new functionality being offered in HPC 2012, closing the gap in terms of features with competitor products like IBM Symphony and TIBCO GridServer.