# Recovering datasets from broken ZFS raidz pools

## Intro

There are generally two kinds of people—those who’ve suffered a severe data loss and those who are about to suffer a severe data loss. I repeatedly jump back and forth between the two kinds.

Recently, a combination of hardware defects and a series of power outages rendered the raidz pool of my previous research group’s NAS unreadable. The OS, an old Solaris 10 x86, would not import the pool, failing with the dreaded I/O error message. We tried importing it in various modern OpenSolaris-based live distributions, even forcing the kernel to try and fix errors where possible, without success. Perhaps disabling the ZIL (because of performance problems with NFS clients) wasn’t such a good idea after all. The lack of resources for proper preventive maintenance meant that there were no real backups to restore from. Gone were a lot of research data, source code, PhD theses, mail, and web content.

In the face of the growing despair (it all happened in the middle of several ongoing project calls) and the rapidly approaching need to accept that the data was most likely gone for good and that one had to start anew, I got curious: what could really break in the “unbreakable” ZFS? Up to that moment, ZFS was to me just a magical filesystem that could do all those things such as cheaply creating multiple filesets and instantaneous snapshots, and I never had any real interest in learning how all of this is implemented. This time my curiosity won, and I asked the sysadmin to wait a while before wiping the disks and to let me first poke around the filesystem and see if I could make it readable again. What started as a set of Python scripts to read and display on-disk data structures quickly grew into a very functional minimalistic ZFS implementation capable of reading and exporting entire datasets.

# How to really screw up your mailing list migration (in ten easy steps)

(based on a true story)

Step 0. Find an existing low-volume announcements-only mailing list, e.g. that of a certain scientific software.

Step 1. Migrate the users to a new mailing list meant for both announcements and general discussions.

Step 2. Make the new mailing list unmoderated and set the Reply-To address to be the list submission address.

Step 3. Post the announcement about the change on the list itself.

Step 4. Realise that many users have completely forgotten that they were on the former list (because it was a really low-volume one).

Step 5. Watch as unsubscribe messages (mostly from people who have long forgotten about the software) start hitting the list, each one going out to all subscribers and triggering a quickly growing avalanche of further angry unsubscribe requests.

Step 6. Realise that step 2 was a really really really dumb one.

Step 7. Realise that step 1 was probably dumb too.

Step 8. Unsubscribe everyone from the new mailing list.

Step 9. Spend some time cleaning up your inbox.

Done.

Optional (recommended): Let an experienced list administrator show you how to do it properly with two separate lists. It’s not as if your server couldn’t handle two lists instead of one, is it?

Never let scientists do the work of the system administrators!

# Recipe: Obtaining peak VM size in pure Fortran

Often in High Performance Computing one needs to know the various memory metrics of a given program, with the peak memory usage probably being the most important one. While the getrusage(2) syscall provides some of that information, its use from Fortran programs is far from optimal, and there are many metrics that it does not expose.

On Linux one can simply parse the /proc/PID/status file. Being a simple text file, it can easily be processed entirely with the built-in Fortran machinery, as shown in the following recipe:

program test
   integer :: vmpeak

   call get_vmpeak(vmpeak)
   print *, 'Peak VM size: ', vmpeak, ' kB'
end program test

!---------------------------------------------------------------!
! Returns current process' peak virtual memory size             !
! Requires Linux procfs mounted at /proc                        !
!---------------------------------------------------------------!
! Output: peak - peak VM size in kB                             !
!---------------------------------------------------------------!
subroutine get_vmpeak(peak)
   implicit none
   integer, intent(out) :: peak
   character(len=80) :: stat_key, stat_value
   !
   peak = 0
   open(unit=1000, file='/proc/self/status', status='old', err=99)
   do while (.true.)
      ! Read the "Key: value" pairs until VmPeak: is found
      read(unit=1000, fmt=*, end=88, err=88) stat_key, stat_value
      if (stat_key == 'VmPeak:') then
         read(stat_value, *) peak
         exit
      end if
   end do
88 close(unit=1000)
   if (peak == 0) goto 99
   return
   !
99 print *, 'ERROR: procfs not mounted or not compatible'
   peak = -1
end subroutine get_vmpeak


The code accesses the status file of the calling process, /proc/self/status. The unit number is hard-coded, which could present problems in some cases. Compilers that implement Fortran 2008 support the NEWUNIT specifier and the following code could be used instead:

integer :: unitno

open(newunit=unitno, file='/proc/self/status', status='old', err=99)
! ...
close(unit=unitno)


With older compilers the same functionality could be simulated using the following code.
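A minimal sketch of one way to do it (the helper name get_free_unit is arbitrary): probe a range of unit numbers with INQUIRE and return the first one that is not currently connected.

integer function get_free_unit()
   implicit none
   integer :: unitno
   logical :: in_use

   get_free_unit = -1
   ! Probe unit numbers and return the first one not connected to a file
   do unitno = 10, 1000
      inquire(unit=unitno, opened=in_use)
      if (.not. in_use) then
         get_free_unit = unitno
         return
      end if
   end do
end function get_free_unit

It can then be used in place of NEWUNIT:

integer, external :: get_free_unit
integer :: unitno

unitno = get_free_unit()
open(unit=unitno, file='/proc/self/status', status='old', err=99)
! ...
close(unit=unitno)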

# MPI programming basics

Embracing current developments in educational technology, the IT Center of RWTH Aachen University (formerly the Center for Computing and Communication) has made available online the audio recordings of most tutorials delivered during this year’s PPCES seminar. Participation in PPCES is free of charge and the course materials are available online, but this is the first time that proper audio recordings have been made.

All videos (presentation slides + audio) are available on the PPCES YouTube channel under a Creative Commons Attribution license. Course materials are available in the PPCES 2014 archive under an unclear (read: do not blatantly steal) license.

My own contribution to PPCES, as usual, consists of:

• Message passing with MPI, part 1: Basic concepts and point-to-point communication

• Message passing with MPI, part 2: Collective operations and often-used patterns

• Tracing and profiling MPI applications with VampirTrace and Vampir

Big thanks to all the people who made recording and publishing the sessions possible.

# Linear congruency considered harmful

Recently I stumbled upon this Stack Overflow question. Its author was puzzled as to why he did not see any improvement in the value of $\pi$, approximated using a parallel implementation of the well-known Monte Carlo method, when he increased the number of OpenMP threads. His expectation was that, since the number of Monte Carlo trials that each thread performs was kept constant, adding more threads would increase the sample size linearly and therefore improve the precision of the approximation. He did not observe such an improvement and blamed it on possible data races, although all the proper locks were in place. The question seems to be related to an assignment that he got at his university. What strikes me is the part of the assignment which requires that he use a specific linear congruential pseudo-random number generator (LCPRNG for short), in his case a terrible one.

An inherent problem with all algorithmic pseudo-random number generators is that they are deterministic and only mimic randomness, since each new output is a well-defined function of the previous output(s) (hence the pseudo- prefix). The more previous outputs the generator function combines, the better the “randomness” of the output sequence can be made. Since the internal state can only be of finite length, sooner or later the generator function maps the current state onto one of the previous ones. At that point the generator starts repeating the same output sequence again and again. The length of the unique part of the sequence is called the cycle length of the generator. The longer the cycle length, the better the PRNG.

Linear congruency is the worst method for generating pseudo-random numbers. The only reason it is still used is that it is extremely easy to implement, requires a very small amount of memory, and works acceptably well in some cases if the parameters are chosen wisely. It’s just that Monte Carlo simulations are rarely among those cases. So what is the problem with LCPRNGs? The problem is that each output depends solely on the previous one, as the congruential relation is

\begin{equation*} p_{i+1} \equiv (A \cdot p_i + B) \pmod{C}, \end{equation*}

where $A$, $B$ and $C$ are constants. If the initial state (the seed of the generator) is $p_0$, then the $i$-th output is the result of $i$ applications of the generator function $f$ to the initial state, $p_i = f^i(p_0)$. When an output repeats the initial state, i.e. $p_N = p_0$ for some $N > 0$, the generator loops, since

\begin{equation*} p_{N+i} = f^{N+i}(p_0) = f^i(f^N(p_0)) = f^i(p_N) = f^i(p_0) = p_i. \end{equation*}

As is also true of human society, short memory leads to history repeating itself in (relatively short) cycles.

The generator from the question uses $C = 741025$ and therefore produces pseudo-random numbers in the range $[0, 741024]$. For each test point two numbers are sampled consecutively from the output sequence, therefore a total of $C^2$, or about 550 billion, distinct points are possible. Right? Wrong! The choice of parameters results in this particular LCPRNG having a cycle length of just 49400, which is orders of magnitude worse than the ANSI C pseudo-random generator rand(), itself usually considered a bad one. Since the cycle length is even, once the sequence folds over, the same set of 24700 points is repeated over and over again. The unique part of the sequence covers $49400/C$, or about 6.7%, of the output range (which is already quite small).
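Whether a given set of parameters is that bad is easy to check empirically: iterate the recurrence from an arbitrary seed and count the steps until the seed reappears. The following minimal sketch does that for the modulus from the question; the multiplier and the increment below are mere placeholders (the actual values from the question are not quoted here), so substitute the parameters of the generator under scrutiny:

program lcg_cycle
   implicit none
   integer, parameter :: ik = selected_int_kind(18)   ! 64-bit integers to avoid overflow in a*p
   ! c is the modulus discussed above; a and b are placeholder values
   integer(ik), parameter :: a = 1366_ik, b = 150889_ik, c = 741025_ik
   integer(ik) :: p0, p, n

   p0 = 12345_ik                       ! arbitrary seed
   p = p0
   do n = 1, c
      p = mod(a*p + b, c)              ! the congruential relation
      if (p == p0) exit                ! initial state reached again: one full cycle traversed
   end do
   if (n > c) then
      print *, 'seed did not recur within C iterations'
   else
      print *, 'cycle length: ', n
   end if
end program lcg_cycle

Note that the seed is guaranteed to recur only if the multiplier and the modulus are coprime (the map is then a permutation of the state space); the bounded loop guards against the other case.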

A central requirement in Monte Carlo simulations is so-called ergodicity, i.e. the ability of the simulated system to pass through all possible states. Because of the looping character of the LCPRNG and its very short cycle length, there are many states that remain unvisited and therefore the simulation exhibits really bad ergodicity. Not only that, but the output space is partitioned into 16 ($\lceil C/49400\rceil$) disjoint cycles and there are effectively only 16 distinct initial values (seeds) possible. Therefore only 32 different sets of points can be drawn from that generator (why 32 and not 16 is left as an exercise to the reader).

How does this relate to the bad approximation of $\pi$? The method used in the question is a geometric approximation based on the idea that if a set of points $\{ P_i \}$ is drawn randomly and uniformly from $[0, 1) \times [0, 1)$, the probability that such a point lies inside the unit circle centred at the origin of the coordinate system is $\frac{\pi}{4}$. Therefore:

\begin{equation*} \pi \approx 4\frac{\sum_{i=1}^N \theta{}(P_i)}{N}, \end{equation*}

where $\theta(P_i)$ is an indicator function that has a value of 1 for all points $\{ P(x,y): x^2+y^2 \leq 1\}$ and 0 for all other points, and $N$ is the number of trials. Now, it is well known that the error of the approximation decreases as $1/\sqrt{N}$ and therefore more trials give better results. The problem in this case is that, due to the looping nature of the LCPRNG, the sum in the numerator is simply $m \times S_0$, where $S_0 = \sum_{i=1}^{24700} \theta(P_i)$. For large $N$ we have $m \approx N/24700$ and therefore the approximation is stuck at the value of:

\begin{equation*} \tilde{\pi} = 4 \frac{\sum_{i=1}^{24700} \theta(P_i)}{24700}. \end{equation*}

It doesn’t matter whether one samples 24700 points or 247000000 points. The result is going to be the same, and the precision in the latter case is not going to be 100 times better but rather exactly the same as in the former case, with the extra computational resources (9999 times those used in the former case) effectively wasted.
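For reference, here is a minimal serial sketch of the estimator described above. It uses the Fortran intrinsic random_number() for the sampling, so it only illustrates the formula and is not the parallel code from the question; feeding it from a short-cycled LCPRNG like the one discussed above would reproduce the stalled approximation.

program mc_pi
   implicit none
   integer, parameter :: n = 10000000          ! number of Monte Carlo trials
   integer :: i, hits
   double precision :: x, y

   hits = 0
   do i = 1, n
      call random_number(x)                    ! x and y are uniform in [0, 1)
      call random_number(y)
      if (x*x + y*y <= 1.0d0) hits = hits + 1  ! the indicator theta(P_i)
   end do
   print *, 'pi is approximately ', 4.0d0 * hits / n
end program mc_pi

With a decent generator, doubling the number of trials shrinks the statistical error by a factor of about $\sqrt{2}$, which is exactly the behaviour the question’s author was expecting but never observed.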