Connecting to JDBC data source from OS X perl

Here's another blog post on a similar topic. I recently had to figure this out and document it, so I'm sharing it here too so I don't forget.

Here are instructions on connecting to a JDBC data source from OS X perl. I used DBD::JDBC.

The process mainly involves setting up a local Java server that provides the front end for a JDBC driver. The Java code implements the JDBC connection to the data source. The perl code talks to the Java server on localhost via DBI so it can access the database.

Here are the steps:

1. Check out any documentation about required setup for your JDBC data source. For example, you may need to install an SSL certificate from the server to your client. You may also need to ensure that your server permits database access at all from your client or IP address.

2. Download your JDBC driver and put any .jar files into a lib directory. For example, VJDBC is a JDBC driver that enables you to establish JDBC connections over Java RMI. It comes with vjdbc.jar.
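For example, assuming you keep the jars in a lib directory under your home directory (the path and download location below are just placeholders), the setup looks something like this:

mkdir -p ~/lib
cp ~/Downloads/vjdbc.jar ~/lib/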

3. Install the perl modules Convert::BER, DBI, and DBD::JDBC. You can either install the modules from CPAN or install ActiveState perl. If you install from CPAN, you'll need to install Xcode first so you have a C compiler. If you use ActiveState, use PPM to install the required modules; note that you'll still need to download the DBD::JDBC tarball from CPAN and install it manually because it doesn't show up in PPM.
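If you go the CPAN route, the installs look something like this (the DBD::JDBC tarball name is a placeholder; substitute the version you actually downloaded):

cpan Convert::BER DBI
tar xzf DBD-JDBC-x.xx.tar.gz
cd DBD-JDBC-x.xx
perl Makefile.PL
make
make install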

4. Copy dbd_jdbc.jar from the DBD::JDBC tarball into the same lib directory as the other .jar files you've collected.

5. Download the latest version of log4j and copy its jar file into the same lib directory as above. You should now have at least three jar files: dbd_jdbc.jar, log4j-1.2.16.jar, and vjdbc.jar.
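A quick listing of the lib directory should now show something like this (file names will vary with the versions you downloaded):

ls ~/lib
dbd_jdbc.jar  log4j-1.2.16.jar  vjdbc.jar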

6. Set your CLASSPATH variable to point to your jar files. If you want this variable set in every shell session, add the export to your .profile.
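Assuming the jars are in ~/lib as above, the export looks something like this (adjust the paths and the log4j version to match what you actually have):

export CLASSPATH=$HOME/lib/dbd_jdbc.jar:$HOME/lib/log4j-1.2.16.jar:$HOME/lib/vjdbc.jar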

7. Start up the Java server with this command:

java -Djdbc.drivers=com.sourcefire.vjdbc.VirtualDriver -Ddbd.port=9001 com.vizdom.dbd.jdbc.Server

Note that I set the port to 9001. The choice is completely arbitrary, but whatever you set it to, make sure you use the same port in your perl code.
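If you'd rather not rely on the CLASSPATH variable, you can pass the jars directly on the command line with -cp instead (same placeholder paths as above):

java -cp "$HOME/lib/dbd_jdbc.jar:$HOME/lib/log4j-1.2.16.jar:$HOME/lib/vjdbc.jar" -Djdbc.drivers=com.sourcefire.vjdbc.VirtualDriver -Ddbd.port=9001 com.vizdom.dbd.jdbc.Server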

You'll probably see something like this:

log4j:WARN No appenders could be found for logger (com.vizdom.dbd.jdbc.Server).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

You're seeing this because the code is looking for a log4j configuration file. Copy log4j.properties into the directory where you started the Java server and these warnings will go away.
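If you don't have a log4j.properties handy, a minimal one that just logs warnings to the console looks something like this (a generic log4j 1.x configuration, nothing specific to DBD::JDBC):

log4j.rootLogger=WARN, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %-5p %c - %m%n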

After this, your host is accepting JDBC connections on port 9001.
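You can sanity-check that the server is listening with nc, which ships with OS X:

nc -z localhost 9001 && echo "server is listening"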

8. Here's some sample code that should work:

#!/usr/bin/perl

use strict;
use warnings;
use DBI;

my $user = "user";
my $pass = "password";
my $host = "10.10.10.1";   # IP address of the actual JDBC data source
my $port = 9001;           # port the local DBD::JDBC Java server is listening on
my $url  = "jdbc:vjdbc:rmi://$host:2000/VJdbc,eqe";  # Get this URL from your JDBC data source

# Properties passed through to the JDBC driver by the Java server.
my %properties = ('user'      => $user,
                  'password'  => $pass,
                  'host.name' => $host,
                  'host.port' => $port);

# The DSN points at the local Java server, not at the data source itself.
my $dsn = "dbi:JDBC:hostname=localhost;port=$port;url=$url";
my $dbh = DBI->connect($dsn, undef, undef,
          { PrintError => 0, RaiseError => 1, jdbc_properties => \%properties })
          or die "Failed to connect: ($DBI::err) $DBI::errstr\n";

my $sql = qq/select * from table/;   # replace "table" with a real table name
my $sth = $dbh->prepare($sql);
$sth->execute();
while (my @row = $sth->fetchrow_array) {
    print join(", ", @row), "\n";
}
$sth->finish();
$dbh->disconnect();

One last point: the URL and the JDBC properties hash are configured with the IP address of the actual JDBC data source, but the DBI DSN points at localhost. So the perl code establishes a connection to the Java server on localhost port 9001, and the Java server in turn establishes the connection to the actual data source using the URL.
