There are generally two kinds of people: those who have suffered a severe data loss and those who are about to. I seem to jump back and forth between the two repeatedly.
Recently, a combination of hardware defects and a series of power outages rendered the raidz pool on my previous research group's NAS unreadable. The OS, an old Solaris 10 x86 installation, refused to import the pool, failing with the dreaded I/O error message.
Often in High Performance Computing one needs to know the various memory metrics of a given program, with the peak memory usage probably being the most important one. While the getrusage(2) syscall provides some of that information, its use in Fortran programs is far from optimal, and there are many metrics it does not expose at all.
On Linux one can simply parse the /proc/PID/status file. Being a plain text file, it can easily be processed entirely with the built-in Fortran machinery, as shown in the following recipe:
Embracing current developments in educational technology, the IT Center of RWTH Aachen University (formerly the Center for Computing and Communication) has made available online the audio recordings of most tutorials delivered during this year's PPCES seminar. Participation in PPCES is free of charge and course materials have always been available online, but this is the first time that proper audio recordings have been made.
All videos (presentation slides + audio) are available on the PPCES YouTube channel under a Creative Commons Attribution license.
Recently I stumbled upon this Stack Overflow question. The author was puzzled as to why he saw no improvement in the value of $\pi$ approximated by a parallel implementation of the well-known Monte Carlo method as he increased the number of OpenMP threads. His expectation was that, since the number of Monte Carlo trials each thread performs was kept constant, adding more threads would linearly increase the total sample size and therefore improve the precision of the approximation.