
Measuring memory per Unix process

Wow, it's been a long time since I've written anything here. Since my last blog post, I've been taking two classes (on two very different topics), on top of the usual daily responsibilities of work, raising two children, etc.

For a work-related requirement, I had to figure out how to measure memory consumed by a specific Unix/Linux process. This is more difficult than it may seem.

For one thing, the memory statistics Linux reports by default are misleading. The ps command (and the VSZ metric specifically) only lists the size of the address space a process references, not the memory it actually uses: a page counts toward VSZ even if it's shared with other processes or not resident in RAM at all. This page suggests the use of smaps instead, where /proc/$pid/smaps breaks down the memory actually used by a process, mapping by mapping.

Because the output of smaps is pretty lengthy, someone wrote a Python script called mem_usage.py to make the output more understandable.
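
To give a sense of what such a script does, here's a minimal sketch along the same lines (this is not the actual mem_usage.py, just an illustration of the idea): it totals the Rss fields across every mapping in /proc/$pid/smaps. Linux only, of course.

#!/usr/bin/env python
# Minimal sketch: total the Rss of every mapping in /proc/$pid/smaps.
# Not the actual mem_usage.py -- just an illustration of the idea.
import sys

def smaps_rss_kb(pid):
    total_kb = 0
    with open("/proc/%d/smaps" % pid) as smaps:
        for line in smaps:
            # Each mapping includes a line like "Rss:        1024 kB"
            if line.startswith("Rss:"):
                total_kb += int(line.split()[1])
    return total_kb

if __name__ == "__main__":
    print("%d kB resident" % smaps_rss_kb(int(sys.argv[1])))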

The main issue is that smaps only exists on Linux, and I had a requirement to measure memory usage on OS X too. OS X doesn't implement the /proc filesystem at all, so smaps wasn't an option.

In the end, I wound up going back to ps and relying on the RSS (resident set size) metric. It's reported on every operating system I cared about, and it indicates how much real memory (text, data, and stack) a specific process has resident, in kilobytes.

11:52:44[~/ruby:162]$ ps u
USER    PID  %CPU %MEM      VSZ    RSS   TT  STAT STARTED      TIME COMMAND
rpark  2668   0.6  1.4  1652720  44816 s000  S    12:40AM   5:01.71 ./eclipse
rpark  4598   0.0  0.1  2435088   1800 s001  S+   11:01PM   0:00.09 ssh rpark@1
rpark  2878   0.0  1.2  2769524  36688 s000  S     1:17AM   0:25.44 /System/Lib
rpark   799   0.0  0.0  2435468    792 s001  S    Tue12PM   0:00.18 -bash
rpark   535   0.0  0.0  2435468    756 s000  S    Tue11AM   0:00.30 -bash
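
If you need the number programmatically rather than eyeballing the ps output, the POSIX -o rss= option prints just the RSS column with no header, and it behaves the same on Linux and OS X (both report kilobytes). Here's a small helper built on it; the function name is mine, but the ps flags are standard:

#!/usr/bin/env python
# Read a process's resident set size in kilobytes via ps.
# "ps -o rss= -p PID" prints only the RSS value, no header,
# and works the same way on Linux and OS X.
import subprocess
import sys

def rss_kb(pid):
    out = subprocess.check_output(["ps", "-o", "rss=", "-p", str(pid)])
    return int(out.strip())

if __name__ == "__main__":
    print("%d kB resident" % rss_kb(int(sys.argv[1])))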

Here are some more discussions on this topic.
