Using Simple Scripts to Acquire Vendor Rich Media Data

As anyone who has had to pull data from rich media providers knows, the process can be tedious and error-prone, and the reporting interfaces tend to be cumbersome. Many Ad Ops teams wait until the end of the month to reconcile serving discrepancies. Unfortunately, that means significant discrepancies or errors may cost a publisher revenue, resources, and time.

I decided to invest some time in simplifying things and seeing if I could automate pulling the data. Developing custom reports wasn’t an option since I wanted to store the data in a central location and be able to query it in ways that some vendor tools cannot.

The main vendors we use are PointRoll, Unicast, Eyeblaster, and Atlas, and by focusing on them I would be able to collect almost all of the data I wanted. Eyeblaster and PointRoll have data services teams and are very easy to work with. With Unicast and Atlas, I had to do some work-arounds to get the data I needed.

The code is simple, and if it helps you, please feel free to use it. I am sure it will save you some time typing out the variables and DB commands if nothing else. If you have improvements to the code, please let me know and I’ll work with AdMonsters to maintain a repository of the latest revisions. The code lacks error checking and alarming (I plan to put in mail hooks to alarm and give a summary of the data that was acquired); for now I look at the daily delivery via a web status page to see if something went wrong. My scripts work consistently and have not failed except when there has been a delivery issue on the vendor’s part. All vendors have had delivery issues at some point, so checking the data is critical.
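As a stopgap for the mail hooks mentioned above, here is a rough sketch of the kind of alarm I have in mind. The recipient address and the sendmail path are placeholders, and summary_line is a hypothetical helper; adjust both for your environment.

```perl
#!/usr/bin/perl
# Hypothetical mail-hook sketch: one status line per vendor file,
# piped to the local sendmail binary. Not part of the scripts below.
use strict;
use warnings;

# Flag zero-byte files, which usually mean a delivery problem
# on the vendor's side.
sub summary_line {
    my ($vendor, $file, $bytes) = @_;
    my $status = $bytes > 0 ? "$bytes bytes" : "EMPTY - check delivery";
    return "$vendor: $file ($status)";
}

sub mail_summary {
    my ($to, $subject, @lines) = @_;
    open(my $mail, "|-", "/usr/sbin/sendmail -t")
        or die "can't start sendmail: $!";
    print $mail "To: $to\nSubject: $subject\n\n";
    print $mail "$_\n" for @lines;
    close($mail) or warn "sendmail exited non-zero\n";
}

# At the end of a pull script you might do something like:
#   mail_summary('adops@example.com', 'rich media pull summary',
#                summary_line('PointRoll', $file, -s $file || 0));
print summary_line('PointRoll', 'MetricsData_12-01-2009.csv', 1024), "\n";
```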

PointRoll and Eyeblaster Scripts

Both PointRoll and Eyeblaster allowed me to grab the data via FTP.

Sample Perl script to get PointRoll data

#!/usr/bin/perl
# simple ftp script to pick up pointroll data
# 200912 ver1.0 chris coluzzi - you may use and modify this script
# freely as long as you retain this header
use strict;
use Net::FTP;
use Date::Manip;

my $ftp_server = "ftp.pointroll.net";
my $ftp_user = "_userid_";
my $ftp_pass = "_password_";

my $err;
my $date = ParseDate("today");
my $query_date = DateCalc($date,"-1 days",\$err);

$query_date = substr( $query_date, 0, 8 );
my $Y = substr( $query_date, 0, 4 );
my $M = substr( $query_date, 4, 2 );
my $D = substr( $query_date, 6, 2 );
$query_date = "$M-$D-$Y";

my $METRICS_FILE="MetricsData_".$query_date.".csv";
my $CLICK_FILE="ClickData_".$query_date.".csv";
my $ACTIVITY_FILE="ActivityData_".$query_date.".csv";

my $path = "/adops/rm/pointroll/csv";
chdir($path) or die "Can't chdir to $path: $!";

my $ftp = Net::FTP->new( $ftp_server, Debug => 0, Passive => 1 )
or die "FTP connect failed: $@";
$ftp->login( $ftp_user, $ftp_pass )
or die "login failed ", $ftp->message;
$ftp->binary();

$ftp->get("$METRICS_FILE")
or die "get failed ", $ftp->message;

$ftp->get("$CLICK_FILE")
or die "get failed ", $ftp->message;

$ftp->get("$ACTIVITY_FILE")
or die "get failed ", $ftp->message;

$ftp->quit;
exit;

PointRoll delivers three files each day. With $query_date set to the prior day's date, they are referenced in the code as:

$METRICS_FILE="MetricsData_".$query_date.".csv";
$CLICK_FILE="ClickData_".$query_date.".csv";
$ACTIVITY_FILE="ActivityData_".$query_date.".csv";

I only load the MetricsData file into the database. The other files have great information in them relating to click and interaction activity, but I am mostly interested in the delivery data, and the MetricsData file also includes interaction information.

Here is the code to load the Metrics data into mySQL via Perl:

#!/usr/bin/perl
# 200912 ver1.0 chris coluzzi - you may use and modify this script
# freely as long as you retain this header

use DBI;
use DBD::mysql;
use Date::Manip;
use Text::ParseWords;
use strict;

my $database = "DBname";
my $user = "_userid_";
my $pw = "_password_";
my $dsn = "dbi:mysql:$database:localhost:3306";
my $dbh = DBI->connect($dsn, $user, $pw) or die
"Unable to connect: $DBI::errstr\n";


my $date = ParseDate("today");
my $Y = substr( $date, 0, 4 );
my $M = substr( $date, 4, 2 );
my $D = substr( $date, 6, 2 );
my $loaddate = $Y."-".$M."-".$D; # mysql load date

my $pointroll_data_date = DateCalc($date,"-1 days");
$pointroll_data_date = substr( $pointroll_data_date, 0, 8 );
$Y = substr( $pointroll_data_date, 0, 4 );
$M = substr( $pointroll_data_date, 4, 2 );
$D = substr( $pointroll_data_date, 6, 2 );
$pointroll_data_date = "$M-$D-$Y";

my ($campaign_startdate_dbfmt, $campaign_enddate_dbfmt,
$placement_startdate_dbfmt, $placement_enddate_dbfmt,
$datadate_dbfmt);

# path to where I stored the data
my $infpath = "/usr/local/apache/htdocs/adops/rm/pointroll/csv";
my $inf = "MetricsData_" . $pointroll_data_date . ".csv";

# input filenames descriptors and temp vars
my ($ifd, $ln, $fld, $sth, $stmt);
my @flds;

# field vars
my ($advertiser, $campaignid, $campaign, $campaign_startdt,
$campaign_enddt, $rich_media, $publisher, $placementid, $placement,
$placement_startdt, $placement_enddt, $placement_size, $creativeid,
$creative, $displayformat, $datadate, $impressions,
$rich_media_imps, $interactions, $interaction_rate,
$average_brand_interaction_time_seconds,
$total_brand_interaction_time_hours, $banner_clicks, $banner_ctr,
$panel_clicks, $panel_ctr, $total_clicks, $total_ctr,
$banner_activities, $panel_activities);

open($ifd, "<", "$infpath/$inf") or die "\nerror opening $infpath/$inf: $!\n";
while ($ln = <$ifd>) {
#skip header and blank lines
if ($ln =~ /^\r/ || $ln =~ /^\s*$/) {
next;
}
chomp $ln;

@flds= quotewords(",", 0, $ln);
foreach $fld (@flds) {
$fld =~ s/[,'&!"]//g;
}

$advertiser = $flds[0];
$campaignid = $flds[1];
$campaign = $flds[2];
$campaign_startdt = $flds[3];
$campaign_enddt = $flds[4];
$rich_media = $flds[5];
$publisher = $flds[6];
$placementid = $flds[7];
$placement = $flds[8];
$placement_startdt = $flds[9];
$placement_enddt = $flds[10];
$placement_size = $flds[11];
$creativeid = $flds[12];
$creative = $flds[13];
$displayformat = $flds[14];
$datadate = $flds[15];
$impressions = $flds[16];
$rich_media_imps = $flds[17];
$interactions = $flds[18];
$interaction_rate = $flds[19];
$average_brand_interaction_time_seconds = $flds[20];
$total_brand_interaction_time_hours = $flds[21];
$banner_clicks = $flds[22];
$banner_ctr = $flds[23];
$panel_clicks = $flds[24];
$panel_ctr = $flds[25];
$total_clicks = $flds[26];
$total_ctr = $flds[27];
$banner_activities = $flds[29];
$panel_activities = $flds[30];

if ($campaignid eq "CampaignID") { next; } #skip the header

($M,$D,$Y) = split(/\//,$campaign_startdt,3);
$campaign_startdate_dbfmt = $Y."-".$M."-".$D;

($M,$D,$Y) = split(/\//,$campaign_enddt,3);
$campaign_enddate_dbfmt = $Y."-".$M."-".$D;

($M,$D,$Y) = split(/\//,$placement_startdt,3);
$placement_startdate_dbfmt = $Y."-".$M."-".$D;

($M,$D,$Y) = split(/\//,$placement_enddt,3);
$placement_enddate_dbfmt = $Y."-".$M."-".$D;

($M,$D,$Y) = split(/\//,$datadate,3);
$datadate_dbfmt = $Y."-".$M."-".$D;

$stmt = "INSERT INTO pointroll (
loaddate,
advertiser,
campaignid,
campaign,
campaign_startdt,
campaign_enddt,
rich_media,
publisher,
placementid,
placement,
placement_startdt,
placement_enddt,
placement_size,
creativeid,
creative,
displayformat,
datadate,
impressions,
rich_media_imps,
interactions,
interaction_rate,
average_brand_interaction_time_seconds,
total_brand_interaction_time_hours,
banner_clicks,
banner_ctr,
panel_clicks,
panel_ctr,
total_clicks,
total_ctr,
banner_activities,
panel_activities) VALUES (
"$loaddate",
"$advertiser",
"$campaignid",
"$campaign",
"$campaign_startdate_dbfmt",
"$campaign_enddate_dbfmt",
"$rich_media",
"$publisher",
"$placementid",
"$placement",
"$placement_startdate_dbfmt",
"$placement_enddate_dbfmt",
"$placement_size",
"$creativeid",
"$creative",
"$displayformat",
"$datadate_dbfmt",
"$impressions",
"$rich_media_imps",
"$interactions",
$interaction_rate,
$average_brand_interaction_time_seconds,
$total_brand_interaction_time_hours,
"$banner_clicks",
$banner_ctr,
"$panel_clicks",
$panel_ctr,
"$total_clicks",
$total_ctr,
"$banner_activities",
"$panel_activities")";



$sth = $dbh->prepare($stmt);
$sth->execute();
} # while
close($ifd);
$sth->finish();
$dbh->disconnect();
exit();

Here is the DB create statement I use. Note: I do waste some space here, but the DB is very manageable at this point and I have been loading data into it for a little over half a year.

mySQL DB create statement for PointRoll MetricsData

create table pointroll (
pr_id int not null auto_increment primary key,
loaddate DATE,
advertiser VARCHAR(125),
campaignid VARCHAR(125),
campaign VARCHAR(125),
campaign_startdt DATE,
campaign_enddt DATE,
rich_media VARCHAR(125),
publisher VARCHAR(125),
placementid VARCHAR (125),
placement VARCHAR (125),
placement_startdt DATE,
placement_enddt DATE,
placement_size VARCHAR(125),
creativeid VARCHAR(125),
creative VARCHAR(125),
displayformat VARCHAR(125),
datadate DATE,
impressions INT,
rich_media_imps INT,
interactions INT,
interaction_rate DECIMAL(8,5),
average_brand_interaction_time_seconds DECIMAL(8,5),
total_brand_interaction_time_hours DECIMAL(8,5),
banner_clicks INT,
banner_ctr DECIMAL(8,5),
panel_clicks INT,
panel_ctr DECIMAL(8,5),
total_clicks INT,
total_ctr DECIMAL(8,5),
banner_activities INT,
panel_activities INT);
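With the table in place, a plain DBI query already goes beyond what most vendor UIs make easy: for example, rolling up month-to-date delivery per placement. This is a minimal sketch, assuming the table above has been populated; the DB name and credentials are the same placeholders used in the loader, and the date range is illustrative.

```perl
#!/usr/bin/perl
# Hypothetical reporting query against the pointroll table above.
# DB name, user, and password are placeholders, as in the loader script.
use strict;
use DBI;

my $dbh = DBI->connect("dbi:mysql:DBname:localhost:3306",
                       "_userid_", "_password_")
    or die "Unable to connect: $DBI::errstr\n";

# Month-to-date impressions and clicks per placement.
my $sth = $dbh->prepare(q{
    SELECT placement,
           SUM(impressions)  AS imps,
           SUM(total_clicks) AS clicks
    FROM pointroll
    WHERE datadate BETWEEN ? AND ?
    GROUP BY placement
    ORDER BY imps DESC
});
$sth->execute('2009-12-01', '2009-12-31');

while (my ($placement, $imps, $clicks) = $sth->fetchrow_array) {
    printf "%-40s %12d %8d\n", $placement, $imps, $clicks;
}
$sth->finish();
$dbh->disconnect();
```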

Eyeblaster uses the same method (FTP) to deliver the data to me, except there are a lot more files to load if you so choose. I am including this script just to save you some effort and typing. I do some renaming at the end: their naming convention has spaces in it, so I rename for better sorting and because I don’t like spaces in file names.

Sample Perl script to get Eyeblaster data

#!/usr/bin/perl
# simple script to get eyeblaster data
# 200912 ver1.0 chris coluzzi - you may use and modify this script
# freely as long as you retain this header
use strict;
use Net::FTP;
use Date::Manip;

my $ftp_server = "ftp.eyeblaster.com";
my $ftp_user = "_userid_;
my $ftp_pass = "_password_";

my ($err, $out);
my $log = "eyeblaster_log.txt";

my ($ofd, $nfd);

my $date = ParseDate("today");
my $query_date = DateCalc($date,"-1 days",\$err);

$query_date = substr( $query_date, 0, 8 );
my $Y = substr( $query_date, 0, 4 );
my $M = substr( $query_date, 4, 2 );
my $D = substr( $query_date, 6, 2 );
$query_date = "$Y$M$D";

# path to where you would like to save the csv files
my $path = "/adops/rm/eyeblaster/csv";
open ($out, ">>", "$path/$log") or die
"error opening $path/$log: $!\n";
print $out "\n - - - - $date - - - - \n";

my $Conversions_FILE="System Site_About.com_".$query_date."_Conversions";
my $CustomInteractions_FILE="System Site_About.com_".$query_date."_CustomInteractions";
my $Expandables_FILE="System Site_About.com_".$query_date."_Expandables";
my $Flights_FILE="System Site_About.com_".$query_date."_Flights";
my $MetaData_FILE="System Site_About.com_".$query_date."_MetaData";
my $Unique_FILE="System Site_About.com_".$query_date."_Unique";
my $Videos_FILE="System Site_About.com_".$query_date."_Videos";

chdir($path) or die "Can't chdir to $path: $!";

my $ftp = Net::FTP->new( $ftp_server, Debug => 0, Passive => 1 )
or die "FTP connect failed: $@";
$ftp->login( $ftp_user, $ftp_pass )
or die "login failed ", $ftp->message;
$ftp->binary();

$ftp->get("$Conversions_FILE.csv")
or print $out "n $Conversions_FILE.csv get failed ", $ftp->message;

$ftp->get("$CustomInteractions_FILE.csv")
or print $out "n $CustomInteractions_FILE.csv get failed ", $ftp->message;

$ftp->get("$Expandables_FILE.csv")
or print $out "n $Expandables_FILE.csv get failed ", $ftp->message;

$ftp->get("$Flights_FILE.csv")
or print $out "n $Flights_FILE.csv get failed ", $ftp->message;

$ftp->get("$MetaData_FILE.csv")
or print $out "n $MetaData_FILE.csv get failed ", $ftp->message;

$ftp->get("$Unique_FILE.csv")
or print $out "n $Unique_FILE.csv get failed ", $ftp->message;

$ftp->get("$Videos_FILE.csv")
or print $out "n $Videos_FILE.csv get failed ", $ftp->message;


$ftp->quit;

# all this below is renaming files to my taste and probably could be
# better done via globbing then applying a regex

$ofd = $Conversions_FILE.".csv";
$nfd = $query_date."_Conversions.csv";
rename($ofd,$nfd) or print $out "\n rename $ofd to $nfd failed: $!\n";

$ofd = $CustomInteractions_FILE.".csv";
$nfd = $query_date."_CustomInteractions.csv";
rename($ofd,$nfd) or print $out "\n rename $ofd to $nfd failed: $!\n";

$ofd = $Expandables_FILE.".csv";
$nfd = $query_date."_Expandables.csv";
rename($ofd,$nfd) or print $out "\n rename $ofd to $nfd failed: $!\n";

$ofd = $Flights_FILE.".csv";
$nfd = $query_date."_Flights.csv";
rename($ofd,$nfd) or print $out "\n rename $ofd to $nfd failed: $!\n";

$ofd = $MetaData_FILE.".csv";
$nfd = $query_date."_MetaData.csv";
rename($ofd,$nfd) or print $out "\n rename $ofd to $nfd failed: $!\n";

$ofd = $Unique_FILE.".csv";
$nfd = $query_date."_Unique.csv";
rename($ofd,$nfd) or print $out "\n rename $ofd to $nfd failed: $!\n";

$ofd = $Videos_FILE.".csv";
$nfd = $query_date."_Videos.csv";
rename($ofd,$nfd) or print $out "\n rename $ofd to $nfd failed: $!\n";


exit();

Atlas (Microsoft Advertising)

Atlas data was not so easy to acquire. I contacted numerous people at Atlas until I reached someone who understood what I was trying to achieve. Together we decided to use a standard report. Atlas allows you to have the report stored on their server at a static URL, and I pick up that daily report with this script. Daily monitoring is key: the report (URL) is only available online for a short amount of time, so if your data is incomplete for some reason you have to catch it quickly. Atlas reporting has been reliable for the most part, but as I stated before, all vendors have data delivery issues at some point. This may be a delay in reporting due to standard maintenance or simply an error. In any case, I use Perl once again to pick up the data via the static URL (provided by Atlas) and save it to a directory for loading into a mySQL database.

Sample Perl script to get Atlas data

#!/usr/bin/perl
# 200912 ver1.0 chris coluzzi - you may use and modify this script
# freely as long as you retain this header

use strict;
use LWP::UserAgent;
use Date::Manip;

my $date = ParseDate("today");
my $Y = substr( $date, 0, 4 );
my $M = substr( $date, 4, 2 );
my $D = substr( $date, 6, 2 );


my $out_file_name = "Atlas_".$Y."-".$M."-".$D.".csv";
my $path = "/adops/rm/atlas/csv";
my $out;

open ($out, ">", "$path/$out_file_name") or die
"couldn't write $out_file_name: $!";

my $ua = LWP::UserAgent->new;
my $req = HTTP::Request->new(GET => 'https://atlas.atlassolutions.com/ReportMgmt/ReportDownload.action?Type=Group&Id=xoxoxo');
$req->authorization_basic('_userid_', '_password_');

my $page = $ua->request($req);

if ($page->is_success) {
print $out $page->content;
} else {
my $error = $page->status_line;
print "error = $error\n";
print "req = $req\n";
}

close $out;
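Since the report URL is only live for a limited window, it may be worth retrying the pickup a few times before giving up. This is a sketch, not part of my production script; fetch_with_retry is a hypothetical helper, and the quick check at the bottom uses a fake fetcher instead of a live request.

```perl
#!/usr/bin/perl
# Hypothetical retry wrapper around a fetch. In the Atlas script above, the
# $try closure would call $ua->request($req) and return
# ($page->is_success, $page->content).
use strict;
use warnings;

# Call $try until it reports success or $max attempts have been made,
# sleeping $delay seconds between attempts.
sub fetch_with_retry {
    my ($try, $max, $delay) = @_;
    for my $attempt (1 .. $max) {
        my ($ok, $payload) = $try->();
        return $payload if $ok;
        warn "attempt $attempt failed\n";
        sleep $delay if $attempt < $max;
    }
    return undef;
}

# Quick check with a fake fetcher that succeeds on the third try:
my $calls = 0;
my $got = fetch_with_retry(sub { $calls++; ($calls >= 3, "payload") }, 5, 0);
print "$got after $calls tries\n";
```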

Here is the code I use to load the above data into a mySQL database.

#!/usr/bin/perl
# 200912 ver1.0 chris coluzzi - you may use and modify this script
# freely as long as you retain this header


use DBI;
use DBD::mysql;
use Date::Manip;
use Text::ParseWords;
use strict;

my $database = "DBname";
my $user = "_userid_";
my $pw = "_password_";
my $dsn = "dbi:mysql:$database:localhost:3306";
my $dbh = DBI->connect($dsn, $user, $pw) or die
"Unable to connect: $DBI::errstr\n";

# input filenames descriptors and temp vars
my ($inf, $ifd, $ln, $fld, $sth);
my @flds;

my $date = ParseDate("today");
my $Y = substr( $date, 0, 4 );
my $M = substr( $date, 4, 2 );
my $D = substr( $date, 6, 2 );
my $loaddate = $Y."-".$M."-".$D; # mysql load date

#path to input file location
my $infpath = "/adops/rm/atlas/csv";
$inf = "$infpath/Atlas_".$Y."-".$M."-".$D.".csv";

# Atlas data is one day behind
my $atlas_datadate = DateCalc($date,"-1 days");
$atlas_datadate = substr( $atlas_datadate, 0, 8 );
my $atlas_datadate_Y = substr( $atlas_datadate, 0, 4 );
my $atlas_datadate_M = substr( $atlas_datadate, 4, 2 );
my $atlas_datadate_D = substr( $atlas_datadate, 6, 2 );
my $datadate = $atlas_datadate_Y."-".$atlas_datadate_M."-".$atlas_datadate_D;

my ($advertiser_name, $media_plan_name, $media_plan_number,
$publisher, $site_name, $flighting_type, $package_name,
$cost_method, $net_cost_basis, $contract_start_date,
$contract_end_date, $total_contracted_quantity, $placement_name,
$placement_height, $placement_width, $site_alias, $performance_index,
$estimated_publisher_count, $impressions, $clicks,
$click_through_rate, $cpa_conversions);


open($ifd, "<", $inf) or die "\nerror opening $inf: $!\n";
while ($ln = <$ifd>) {
#skip header and blank lines
if ($ln =~ /^\s*$/ || $ln =~ /^Title/ || $ln =~ /^Publisher /)
{
next;
}
chomp $ln;
$ln =~ s/['&!]//g;
@flds= quotewords(",", 0, $ln);
foreach $fld (@flds) {
$fld =~ s/[,'&!"]//g;
}
$advertiser_name = $flds[0];
$media_plan_name = $flds[1];
$media_plan_number = $flds[2];
$publisher = $flds[3];
$site_name = $flds[4];
$flighting_type = $flds[5];
$package_name = $flds[6];
$cost_method = $flds[7];
$net_cost_basis = $flds[8];
$contract_start_date = $flds[9];
$contract_end_date = $flds[10];
$total_contracted_quantity = $flds[11];
$placement_name = $flds[12];
$placement_height = $flds[13];
$placement_width = $flds[14];
$site_alias = $flds[15];
$performance_index = $flds[16];
$estimated_publisher_count = $flds[17];
$impressions = $flds[18];
$clicks = $flds[19];
$click_through_rate = $flds[20];
$cpa_conversions = $flds[21];

if ($advertiser_name eq "Advertiser Name") {
next;
}

my $stmt = "INSERT INTO atlas (
loaddate,
datadate,
advertiser_name,
media_plan_name,
media_plan_number,
publisher,
site_name,
flighting_type,
package_name,
cost_method,
net_cost_basis,
contract_start_date,
contract_end_date,
total_contracted_quantity,
placement_name,
placement_height,
placement_width,
site_alias,
performance_index,
estimated_publisher_count,
impressions,
clicks,
click_through_rate,
cpa_conversions) VALUES (
"$loaddate",
"$datadate",
"$advertiser_name",
"$media_plan_name",
"$media_plan_number",
"$publisher",
"$site_name",
"$flighting_type",
"$package_name",
"$cost_method",
"$net_cost_basis",
"$contract_start_date",
"$contract_end_date",
"$total_contracted_quantity",
"$placement_name",
"$placement_height",
"$placement_width",
"$site_alias",
"$performance_index",
"$estimated_publisher_count",
"$impressions",
"$clicks",
"$click_through_rate",
"$cpa_conversions")";

$sth = $dbh->prepare($stmt);
$sth->execute();
} #while $ln

close($ifd);

$sth->finish();
$dbh->disconnect();
exit();

Here is the DB create statement I used:

create table atlas (
atlas_id int not null auto_increment primary key,
loaddate DATE,
datadate DATE,
advertiser_name VARCHAR(125),
media_plan_name VARCHAR(125),
media_plan_number VARCHAR(125),
publisher VARCHAR(125),
site_name VARCHAR(125),
flighting_type VARCHAR(125),
package_name VARCHAR(125),
cost_method VARCHAR(125),
net_cost_basis VARCHAR(125),
contract_start_date DATE,
contract_end_date DATE,
total_contracted_quantity INT,
placement_name VARCHAR(125),
placement_height VARCHAR(125),
placement_width VARCHAR(125),
site_alias VARCHAR(125),
performance_index DECIMAL(5,5),
estimated_publisher_count INT,
impressions INT, clicks INT,
click_through_rate DECIMAL(5,5),
cpa_conversions DECIMAL(5,5)
);

Unicast

This is perhaps the trickiest and most useful of the scripts. I was not able to get someone at Unicast to help me get the data, so I went around the fence on this one. I set up a mailbox and created a standard report that would be emailed to it each day. I use Perl's Mail::POP3Client to pop the day’s report, strip off the CSV, save it in a directory, and load it into a database.

This script runs every day and is reliable.

Sample Perl script to pop a daily report from Unicast

#!/usr/bin/perl
# based on :
# http://disobey.com/detergent/code/leecharoo/leechpop.pl
# 200912 ver1.0 chris coluzzi - you may use and modify this script
# freely as long as you retain this header

use strict;
use Mail::POP3Client;
use MIME::Parser;
use Date::Calc qw( Today Day_of_Week Month_to_Text );

my ($year,$month,$day) = Date::Calc::Add_Delta_Days(Date::Calc::Today(), 0 );

# zero-padded date string used to match the attachment filename
my $filematch = sprintf("%04d%02d%02d", $year, $month, $day);

# this is the regex to match the attachment extension
my $valid_exts = "csv";

# directory to write to
my $savedir = "/adops/rm/unicast/csv";

my $monthText =substr( Month_to_Text($month) ,0 ,3);
my $datematch = "$day $monthText $year";

my $pop = new Mail::POP3Client( USER => "[email protected]",
PASSWORD => "_password_",
HOST => "mail.coluzzi.com",
USESSL => "true",
);

$pop->Connect();
for( my $i = 1; $i <= $pop->Count(); $i++ ) {
my $shouldFetch = 0;
foreach( $pop->Head( $i ) ) {
if ( $_ =~ /^Date:\s+/i ){
if ( $_ =~ /$datematch/i ) {
print "matches $_\n";
$shouldFetch=1;
} else {
print "does not match $_\n";
}
}
}
if ( $shouldFetch == 1 ) {
my $msg = $pop->Retrieve($i);
my $parser = new MIME::Parser;
$parser->output_dir( $savedir );
my $entity = $parser->parse_data($msg);
my @parts = $entity->parts;
foreach my $part (@parts) {
my $path = ($part->bodyhandle) ? $part->bodyhandle->path : undef;
next unless $path;
$path =~ /\w+\.([^.]+)$/;
my $ext = $1;
next unless $ext;
unless ($valid_exts =~ /$ext/ ) {
print " Removing unwanted filetype ($ext): $path\n";
unlink $path or print " > Error removing file at $path: $!\n";
next; # move on to the next attachment or message.
}
print " Keeping valid file: $path\n";
}
}
}

$pop->Close();

# now, jump into our savedir and remove all msg-*
# files which are message bodies saved by MIME::Parser.
chdir ($savedir);
opendir(SAVE, "./") or die $!;
my @dir_files = grep !/^\.\.?$/, readdir(SAVE);
closedir(SAVE);
foreach (@dir_files) { unlink if $_ =~ /^msg-/; }

In Conclusion

I hope these simple examples help you automate the process of pulling and storing data from rich media vendors. There are some third-party vendors that can help you automate and integrate the data (AdJuster and Operative, to name two). Feel free to use my scripts if you want to try it yourself. My hope is to maintain these scripts on the AdMonsters site, gather feedback from others in the community, and revise them as needed. Make sure to monitor these scripts and don’t assume everything has been collected properly.


Chris joined About.com in January 2000. In his current role, he manages all aspects of datacenter operations, network engineering, and advertising operations.

Prior to About, Chris served as VP of Enterprise Engineering and Professional Services at GlobalCenter.  On the Professional Services front he was in charge of supporting Yahoo, MTV, The New York Times, The Washington Post, Toys “R” Us, and other prominent web sites.  As VP of Enterprise Engineering, he oversaw GlobalCenter’s top talent specializing in the design and development of reliable and scalable technologies.

Chris’ past positions include Webmaster at Cisco Systems, Software Development Manager at The San Jose Mercury News, DBA at IBM, and co-founder of I-Storm, where he developed websites and software packages for Egghead Software, Sun Microsystems, Hewlett Packard, Disney, and Intuit.

Chris graduated from the University of California with a Bachelor of Science degree in Computer Science.