
Measuring memory per Unix process

Wow, it's been a long time since I've written anything here. Since my last blog post, I've been taking two classes (in two very different topics), along with the other daily responsibilities of working, raising two children, etc.

For a work-related requirement, I had to figure out how to measure the memory consumed by a specific Unix/Linux process. This turned out to be more difficult than it may seem.

For one thing, the memory statistics Linux reports through ps can be misleading. The VSZ metric in particular only lists the size of the address space a process has referenced, not how much memory it is actually using. This page suggests using smaps instead: /proc/$pid/smaps breaks down the actual amount of memory used by a process, mapping by mapping.
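
Since smaps reports per-mapping Rss and Pss lines, you can also total them yourself. Here's a minimal Python sketch of that idea (my own, assuming Linux and a readable /proc/$pid/smaps); Pss is the proportional set size, which splits shared pages among the processes sharing them:

#!/usr/bin/env python
# Minimal sketch (Linux only): total the "Pss:" lines in
# /proc/<pid>/smaps to get a process's proportional memory use in kB.
import sys

def smaps_total(pid, field="Pss"):
    total_kb = 0
    with open("/proc/%s/smaps" % pid) as f:
        for line in f:
            # Each mapping contributes lines like "Pss:    1234 kB"
            if line.startswith(field + ":"):
                total_kb += int(line.split()[1])
    return total_kb

if __name__ == "__main__":
    print("%s kB" % smaps_total(sys.argv[1]))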

Because the output of smaps is pretty lengthy, someone wrote a Python script called mem_usage.py to make the output easier to digest.

The main issue is that smaps exists only on Linux, and I had a requirement to measure memory usage on OS X too. OS X doesn't support the /proc filesystem at all, so I couldn't use smaps there.

In the end, I wound up going back to ps and relying on the RSS (resident set size) metric. It's reported on multiple operating systems and indicates how much physical RAM, in kilobytes, is being used for the text and data segments of a specific process:

11:52:44[~/ruby:162]$ ps u
USER    PID  %CPU %MEM      VSZ    RSS   TT  STAT STARTED      TIME COMMAND
rpark  2668   0.6  1.4  1652720  44816 s000  S    12:40AM   5:01.71 ./eclipse
rpark  4598   0.0  0.1  2435088   1800 s001  S+   11:01PM   0:00.09 ssh rpark@1
rpark  2878   0.0  1.2  2769524  36688 s000  S     1:17AM   0:25.44 /System/Lib
rpark   799   0.0  0.0  2435468    792 s001  S    Tue12PM   0:00.18 -bash
rpark   535   0.0  0.0  2435468    756 s000  S    Tue11AM   0:00.30 -bash
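
If you just need the RSS figure for one process in a script, ps can emit a bare number with -o rss=, and that invocation behaves the same on Linux and OS X. A minimal Python sketch (the pid argument is a placeholder for whatever process you're watching):

#!/usr/bin/env python
# Minimal sketch: ask ps for a process's RSS in kB. The -o rss= and
# -p flags work on both Linux and OS X.
import subprocess
import sys

def rss_kb(pid):
    out = subprocess.check_output(["ps", "-o", "rss=", "-p", str(pid)])
    return int(out.strip())

if __name__ == "__main__":
    print("%s kB" % rss_kb(sys.argv[1]))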

Here are some more discussions on this topic.
