Using open source client and server software on a cluster of remote Windows instances to teach imaging software and communications

At a recent PET imaging conference in Finland, I taught a three-session workshop in practical medical imaging for scientists and physicians. The class was oriented toward research rather than clinical medicine, so we were also interested in image formats other than DICOM, since a lot of research software is developed for specific uses in academic environments and uses application-specific formats. NIFTI is the format of choice in neuroimaging, so it was our second format alongside DICOM.

In a previous iteration of this course, I’d had the participants install the necessary software on their own laptops. This gave more exposure to the real-life problems involved in installing software, but led to too much time being spent on technical and logistical issues. Not everyone had full administrative rights on their PCs, particularly if they were company-supplied. Several of the programs were Java-based, and installing and authorising Java created its own problems.

For this year’s workshop, I wanted the participants to be able to concentrate more on the imaging programs. As it’s now so easy to deploy Windows machines on Amazon Web Services’ EC2, I chose AWS t2.small instances (2 GB RAM, 1 CPU) as the platform. Participants log in to the instances remotely using Microsoft Remote Desktop from a Mac or PC, and with a decent connection it’s possible to work interactively on the remote Windows desktop. The software and test data for the course were pre-installed on the workstations, and participants could, if they wished, install the same or equivalent software on their own computers for a bit more of a challenge. The OS on Amazon is Windows Server 2012, which is sufficiently similar to Windows 8 or 10 that most people find it familiar to work with.

Software

A master instance was created and later cloned by exporting the instance’s disk to an Amazon Machine Image (AMI), from which any number of instances can be launched (a minimal AWS CLI sketch of this step follows the list below). On the master instance I set things up to provide a range of imaging capabilities:

  • Installed all the latest Windows updates.
  • Turned down internet security, and installed Chrome and Firefox.
  • Created a user with administrative and remote login rights.
  • Installed software:
    • Java, with permissions configured, and without the bundled spammy toolbar and other unwanted changes.
    • Cygwin and PowerShell shells.
    • RadiAnt, Synedra and Ginkgo CADx DICOM image viewers and PACS nodes.
    • Mango for viewing and inspecting headers of DICOM/NIFTI files.
    • VINCI for advanced image analysis.
    • dcm4che3 and DCMTK command-line DICOM tools.
    • Orthanc DICOM server with plugins for web viewer and DICOMWeb.
    • DICOMBrowser and LONI Inspector for viewing and comparing image headers.
    • MRIConvert for DICOM conversion.
    • DicomCleaner for DICOM anonymisation.
    • dcm2niix for DICOM to NIFTI conversion through the GUI and command line.
  • Set up the Windows path for the command-line programs.
  • Opened the ports needed for DICOM and HTTP communication (4242 and 8042 for Orthanc, 11112 for RadiAnt).
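The cloning step itself can be driven from the AWS command line. A minimal sketch, with placeholder instance, image, key, security-group and subnet IDs:

# Create an AMI from the configured master instance (instance ID is a placeholder)
aws ec2 create-image --instance-id i-0123456789abcdef0 --name imaging-workshop-master

# Launch 20 workstation instances from the resulting AMI (all IDs are placeholders)
aws ec2 run-instances --image-id ami-0123456789abcdef0 --count 20 \
    --instance-type t2.small --key-name workshop-key \
    --security-group-ids sg-0123456789abcdef0 --subnet-id subnet-0123456789abcdef0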

Several of the programs perform the same or similar tasks: three DICOM viewers, two command-line DICOM toolkits, several DICOM header viewers. This helps to demonstrate that the same functionality is available in different programs.

Overall, the programs worked well together. The workshop participants downloaded sample data files from I Do Imaging and we used the inspectors to examine the headers of DICOM and NIFTI files. After converting from DICOM to NIFTI by several methods, we looked at how the different conversion programs created different header values, and we used VINCI to examine the small differences in the voxel values resulting from the conversions. We also performed anonymisation with one-click and configurable processes and looked at what was altered in the image files.
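As one example of the conversion step, dcm2niix can be run from the command line as well as through a GUI. A typical invocation might look something like this, with placeholder directory names:

# Convert a directory of DICOM files to compressed NIFTI,
# naming each output file by protocol and series number
dcm2niix -z y -f %p_%s -o nifti_out dicom_in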

The participants connected to the public PACS servers hosted at I Do Imaging, as well as those at PixelMed and Medical Connections. Since the I Do Imaging public PACS was re-implemented using dcm4chee Archive 5, in which C-GET has been deprecated, it can currently be used only for searching, not for downloading, data. C-MOVE requires prior knowledge of the requesting workstation, which of course is not possible for a public PACS.

Introducing the Command Line

The theme of the workshop is increasing efficiency through better procedures. In part this is achieved by choosing the right software, and I aim to encourage the participants to make use of the vast selection of free medical imaging software that’s available. Another goal is to move away from manual processes: even if there isn’t time to teach automation in depth, I want to at least demonstrate how, with a little knowledge, you can get the computer to do the routine work for you, much faster and more accurately than you could by hand.

The automation workflow I teach has three levels:

  • Prototype the end-to-end processing using GUI tools, and test the result until satisfied.
  • Replicate this workflow using command-line tools, and test that you get the same results.
  • Develop a script or other automation process to call the command-line programs with the correct inputs (a minimal sketch follows this list).
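As a taste of that third level, here is a minimal bash sketch of the kind of script involved, assuming a hypothetical layout with one DICOM directory per subject under a top-level data directory:

#!/bin/bash
# Convert every subject's DICOM directory to NIFTI in one pass.
# The data/ and nifti/ directory layout is an assumption for illustration.
for subject_dir in data/*/ ; do
    subject=$(basename "$subject_dir")
    mkdir -p nifti/"$subject"
    dcm2niix -z y -o nifti/"$subject" "$subject_dir"
done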

We used several programs to demonstrate the speed and flexibility of command-line programs compared to their GUI counterparts. Both the DCMTK and dcm4che toolkits have a program named dcmdump to dump a DICOM header. Lots of GUI programs can do this, of course, but running it from the command line produces text output which can be processed using the full power of the Linux or Windows shell. You can redirect the output of the dump command to a file for later analysis:

dcmdump img_000.dcm > img_000.txt

Or you can use a shell loop to extract one field from a number of files. Let’s say you have a directory containing images from multiple series in a study. You can quickly extract the series times using the shell constructs for, grep and sort:

$ for f in *dcm ; do dcmdump $f | grep SeriesTime; done | sort -u
(0008,0031) TM [002858.419]                             #  10, 1 SeriesTime
(0008,0031) TM [004553.476]                             #  10, 1 SeriesTime

You could achieve something similar with a GUI program such as LONI Inspector, but doing it at the command line with text tools is faster, more flexible, and can be automated, with the results from one program fed into another to form a processing pipeline.
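The pipeline idea extends naturally from the example above. DCMTK’s dcmdump can print just a single tag with its +P option, so a one-line loop can count how many files belong to each series description:

# Count the files in each series description (tag 0008,103e)
$ for f in *dcm ; do dcmdump +P SeriesDescription $f ; done | sort | uniq -c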

Learning to use the shell is a long process that can’t be taught in one class, but it’s helpful to expose people to command-line processing at an early stage to demonstrate what’s possible. It’s particularly instructive to first run a process in a GUI program to understand the procedure, then to re-implement the process on the command line.

Running a local DICOM PACS

To provide a working PACS server that could be used as a data source, I needed to configure a list of DICOM nodes, one per workstation, with known IP addresses and AE titles. This allows each node to retrieve data using a C-MOVE request, which requires that the remote nodes be pre-configured in the server.

This was achieved by giving each workstation a network interface on a private subnet, with a configurable IP address. The convention was to have the AE title correspond to the workstation’s private IP: the private addresses were all in the 172.31.15.x range, and the nodes were configured starting with private IP 172.31.15.1 and AE title WORKSTATION01, on up to 20.

Separately from the private IP, each workstation needed to be accessible from the Internet. This was accomplished by setting up a DNS record pointing to the dynamically allocated public IP address of each workstation. Each person could then use the fully qualified DNS name to connect to their workstation using Remote Desktop. How this was all set up is covered in the blog post on creating AWS instances.
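For the curious, if the domain is hosted in Route 53, this kind of record can be created from the AWS CLI. A minimal sketch, with the hosted zone ID, domain name and IP address all placeholders:

# Point workshop01.example.com at a workstation's current public IP (all values are placeholders)
aws route53 change-resource-record-sets --hosted-zone-id Z0123456789ABC --change-batch \
  '{"Changes": [{"Action": "UPSERT", "ResourceRecordSet": {"Name": "workshop01.example.com",
    "Type": "A", "TTL": 300, "ResourceRecords": [{"Value": "203.0.113.10"}]}}]}'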

Orthanc as a PACS server

Because we could control the private IP addresses of the workstations, configuring the server (Orthanc) was quite simple. Orthanc is configured through a plain-text JSON file, and the DICOM server parameters are set in one section of this file.

/**
 * Configuration of the DICOM server
 **/

// Enable the DICOM server.
"DicomServerEnabled" : true,

// The DICOM Application Entity Title
"DicomAet" : "ORTHANC",

// Check whether the called AET corresponds during a DICOM request
"DicomCheckCalledAet" : false,

// The DICOM port
"DicomPort" : 4242,

The remote nodes (each workshop participant’s workstation), which query and receive data from the server, are configured in the ‘Network topology’ section of the same file. Each node’s entry has an identifying name and an array giving the AE title, address, and port number of the node. Our AE titles and private IP addresses are numbered sequentially starting at 1, and we use port 11112 on each node.

/**
 * Network topology
 **/

// The list of the known DICOM modalities
"DicomModalities" : {
  "workstation01" : [ "WORKSTATION01", "172.31.15.1",  11112 ],
  "workstation02" : [ "WORKSTATION02", "172.31.15.2",  11112 ],
  // And so on...
  "workstation20" : [ "WORKSTATION20", "172.31.15.20", 11112 ]
}

Now each node is configured in the Orthanc server and can use its AE title as a C-MOVE destination. We used RadiAnt as a PACS client on each node and were able to query and retrieve from the Orthanc server. As an advanced exercise, we replicated the process on the command line using several of the dcm4che3 tools.
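The details vary with tool versions, but the command-line replication looked roughly like this with the dcm4che3 tools. The server address 172.31.15.100 is a placeholder, and a C-STORE listener (here, RadiAnt) is assumed to be accepting images on port 11112 of the local workstation:

# Query the Orthanc server for studies matching a patient name
findscu -c ORTHANC@172.31.15.100:4242 -m PatientName='DOE*' -r StudyInstanceUID

# Ask the server to C-MOVE one study to this workstation's AE title
# (substitute a StudyInstanceUID returned by the query above)
movescu -c ORTHANC@172.31.15.100:4242 --dest WORKSTATION01 -m StudyInstanceUID=<study-uid>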

In addition to the standard DICOM communications interface, Orthanc has a REST interface that uses plain HTTP. We used common network tools (Postman for the GUI, curl for the command line) to query the server through its HTTP port.
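A few representative requests of the kind we ran with curl, using a placeholder hostname and the HTTP port opened earlier:

# List the identifiers of all studies held by the server
curl http://orthanc-host:8042/studies

# Search for studies by patient name through Orthanc's find endpoint
curl -X POST http://orthanc-host:8042/tools/find -d '{"Level": "Study", "Query": {"PatientName": "DOE*"}}'

# Upload a DICOM file over HTTP
curl -X POST http://orthanc-host:8042/instances --data-binary @img_000.dcm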

Orthanc turned out to be particularly suitable for this class. It’s fast and easy to install and configure, needs no external database, and multiple PACS clients can be set up simply by editing the JSON configuration file. After gaining experience in querying and retrieving from the master Orthanc server, the participants could then set up their own servers and send images to each other. This is a great introduction to how easy it is to run your own local image server.