
Email Open Tracking using AWS CloudFront

At Plum District, we send out a lot of marketing email – on some days, more than 5 million messages. It’s a big part of our business, and we spend a lot of our time tracking open and send rates, and running analytics on this data. Luckily, SendGrid funnels all that data to us via their Event Notification API, which posts every event (processed, opened, clicked, etc.) back to our servers. We recently acquired another company, but unfortunately they send their emails via Amazon SES, which doesn’t have any tracking.

In this article, I’ll discuss an innovative way to track email open rates using Amazon CloudFront. The basic mechanism is really just pixel tracking: CloudFront provides detailed access logs that are dumped directly into S3. Hence, we can host the pixel in CloudFront, embed it in the email (plus any optional HTTP params we want to track), and count how many times that pixel is loaded. That gives us the open count, plus a bunch of other information such as demographics and the most active timeframes.

Here are the steps:

  1. Create an S3 bucket, if you don’t have one, to store the pixel. In this case, I’ve put the pixel at s3://plum-mms/images/1.gif. Feel free to borrow the pixel here. It’s just a 1×1 transparent GIF. Also, make sure the bucket has the appropriate permission, i.e. Everyone → Open / Download.

  2. Create another S3 bucket for logs. In our case, we’ve created a bucket named plum-mms-logs.

  3. Configure a CloudFront distribution using the S3 bucket you created in step 1. Make sure the following are configured when creating it:

    • Logging: On. This tells CloudFront to enable logging.
    • Cookie Logging: Off. We don’t really need that information.
    • Log Bucket: This tells CloudFront where to dump the access log. In my case, I selected plum-mms-logs.s3.amazonaws.com, which corresponds to the bucket I created in step 2.
  4. Capture the domain name associated with your new CloudFront distribution. In our case, it’s d2x9v85k2ohcuy.cloudfront.net. You can test that http://d2x9v85k2ohcuy.cloudfront.net/images/1.gif returns the GIF file in a browser.

  5. Insert the pixel in your email template. We want to capture who has opened an email, so we’ve included the subscriber ID, as well as the email category, as GET parameters. Here’s a bit of Ruby code to generate the pixel URL:

    
    require 'cgi'

    # Escape the category in case it contains characters that aren't URL-safe.
    pixel_tracking_url = nil
    if subscriber && category
      pixel_tracking_url = "http://d2x9v85k2ohcuy.cloudfront.net/images/1.gif?sid=#{subscriber.id}&category=#{CGI.escape(category)}"
    end
    
    
  6. And in your email template (.erb file), you can add the code anywhere in the email:

    
    <% if @pixel_tracking_url %>
      <img src="<%= @pixel_tracking_url %>" width="1" height="1" alt=""/>
    <% end %>
    
    

Now we are ready to roll! You can send the email to a few test email accounts, and see if you are getting the logs. It usually takes a few hours for CloudFront to push the log files out to your logging bucket.
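Once the log files start arriving, you need something to turn them into open counts. Here’s a minimal Python sketch of that step, assuming boto is installed and using the plum-mms-logs bucket from step 2; the script name and the key placeholders are mine, and the field positions are read from the “#Fields:” header that CloudFront writes at the top of each log file:

# count_opens.py - a sketch, not production code.
import gzip
import boto
from collections import Counter
from StringIO import StringIO
from urlparse import parse_qs

conn = boto.connect_s3('AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY')
bucket = conn.get_bucket('plum-mms-logs')

opens = Counter()  # open count per subscriber id

for key in bucket.list():
    # Each CloudFront log file is a gzipped, tab-separated W3C log.
    data = gzip.GzipFile(fileobj=StringIO(key.get_contents_as_string())).read()
    fields = None
    for line in data.splitlines():
        if line.startswith('#Fields:'):
            fields = line.split()[1:]
            continue
        if line.startswith('#') or fields is None:
            continue
        row = dict(zip(fields, line.split('\t')))
        if row.get('cs-uri-stem') != '/images/1.gif':
            continue
        # sid and category are the GET params we embedded in the pixel URL.
        params = parse_qs(row.get('cs-uri-query', ''))
        if 'sid' in params:
            opens[params['sid'][0]] += 1

for sid, count in opens.most_common(10):
    print sid, count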

HOWTO: Deploy a fault tolerant Django app on AWS – Part 2: Moving static and media files to S3

In the last article, I discussed our attempt to remove points of failure in our infrastructure and increase redundancy. We moved our single, locally running database instance to RDS, where fault tolerance is built in through its Multi-AZ offering.

In this article, I’ll continue this journey by moving our Django static and media files from the local file system to S3. Static files live in the [app]/static folder, which is typically where JavaScript, CSS, static images and 3rd-party JavaScript libraries are stored. Media files are user-generated files, uploaded through FileField and ImageField fields on Django models, e.g. the profile picture of a user, or a photo of an item. By default, when you create a Django application using the standard “django-admin.py startproject”, all media files are stored in the [app]/media folder and static files in the [app]/static folder. The locations are controlled by the following settings in settings.py:


# Absolute filesystem path to the directory that will hold user-uploaded files.
# Example: "/home/media/media.lawrence.com/media/"
MEDIA_ROOT = ''

# URL that handles the media served from MEDIA_ROOT. Make sure to use a
# trailing slash.
# Examples: "http://media.lawrence.com/media/", "http://example.com/media/"
MEDIA_URL = ''

# Absolute path to the directory static files should be collected to.
# Don't put anything in this directory yourself; store your static files
# in apps' "static/" subdirectories and in STATICFILES_DIRS.
# Example: "/home/media/media.lawrence.com/static/"
STATIC_ROOT = ''

# URL prefix for static files.
# Example: "http://media.lawrence.com/static/"
STATIC_URL = '/static/'

So, why do we need to move these files off the EC2 local file system? It’s a prerequisite to spinning up multiple EC2 instances hosting the Django application. Specifically, we can’t have media files sitting in two locations. For example, when a user updates his or her profile picture, the POST request goes to one server, so the new image is stored on that server’s local file system. That’s bad, because the other app server won’t have access to it (unless you set up a shared folder between the instances, which is what was typically done before Jeff Bezos gave us S3). By moving the static and media files to S3, both servers use the same S3 endpoints to store and retrieve these files. Another HUGE plus is that the web servers (Apache or nginx) no longer have to handle these static file requests, so their disk and network load drops drastically.

Enough talking. First things first: we need to download and install django-storages and boto.


pip install django-storages boto

Now, create an S3 bucket. This part is easy. Log into the AWS console, click over to S3 and click Create Bucket. Give it a name. For this example, we’ll use “spotivate”. All our static and media files will be accessed through http://spotivate.s3.amazonaws.com/static/... and http://spotivate.s3.amazonaws.com/media/... respectively.
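If you’d rather script the bucket creation, boto can do it too. A quick sketch, using the key and secret described next:

import boto

# Create the bucket programmatically (same result as the console).
conn = boto.connect_s3('AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY')
conn.create_bucket('spotivate')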

Also, we need to get the AWS Key and Secret that boto needs to access S3. You can find them on your AWS Security Credentials page.

Now we have all the info to change Django settings. The instructions here are loosely based on various articles I’ve read, but Phil Gyford’s article has been most helpful. Following his instructions, I first created spotivate/s3utils.py with the following content:


from storages.backends.s3boto import S3BotoStorage

StaticS3BotoStorage = lambda: S3BotoStorage(location='static')
MediaS3BotoStorage = lambda: S3BotoStorage(location='media')

Then, in settings.py, I added storages to INSTALLED_APPS, along with a bunch of other variables that tell Django where to put and read media and static files:


INSTALLED_APPS = (
    ...
    ...
    'storages',
)

...
...

###################################
# s3 storage
###################################

DEFAULT_FILE_STORAGE = 'spotivate.s3utils.MediaS3BotoStorage'
STATICFILES_STORAGE = 'spotivate.s3utils.StaticS3BotoStorage'

AWS_ACCESS_KEY_ID = 'xxxxxxxxxx'
AWS_SECRET_ACCESS_KEY = 'xxxxxxxxxxxxxxxxxxxxxxxxxx'
AWS_STORAGE_BUCKET_NAME = 'spotivate'

# Note: no trailing slash on S3_URL; the directory constants provide it.
S3_URL = 'http://%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
STATIC_DIRECTORY = '/static/'
MEDIA_DIRECTORY = '/media/'
STATIC_URL = S3_URL + STATIC_DIRECTORY
MEDIA_URL = S3_URL + MEDIA_DIRECTORY
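A quick way to convince yourself the media side works: with DEFAULT_FILE_STORAGE set as above, any FileField/ImageField upload lands under the media/ prefix of the bucket, and .url resolves to S3. A sketch with a hypothetical model (not from our codebase):

# models.py - hypothetical model, just to illustrate the storage behavior.
from django.db import models

class Profile(models.Model):
    # upload_to is relative to MediaS3BotoStorage's 'media' location, so
    # files end up at http://spotivate.s3.amazonaws.com/media/profile_pics/...
    picture = models.ImageField(upload_to='profile_pics')

# In a Django shell, after saving a Profile with a picture:
#   profile.picture.url
#   -> 'http://spotivate.s3.amazonaws.com/media/profile_pics/<filename>'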

Voila. We are almost done. To upload all the static files to S3, run the following command:


python manage.py collectstatic

This will copy all the files in your static folders to S3. What about media files? We need to upload them to S3 at least once. Why only once? Because once the settings above are deployed, new uploads (like profile pictures) get posted straight to S3. I found a great Python package called boto-rsync that does the job beautifully.


pip install boto_rsync
boto-rsync media s3://spotivate/media -a [AWS_ACCESS_KEY_ID] -s [AWS_SECRET_ACCESS_KEY]

Verify in the AWS console that all static and media files have indeed been copied to S3. Deploy the server, and hit a page. You should see that all references to JavaScript, CSS and media files point to S3.

It actually didn’t turn out so easy for me the first time around. I found that many CSS files were still being served from the local file system. Looking at the template, I realized I had this:


<link href="/static/web/bootstrap230/css/bootstrap.css" rel="stylesheet" type="text/css" charset="utf-8">
<link href="/static/web/jcarousel/css/style.css" rel="stylesheet" type="text/css" charset="utf-8">
<link href="/static/web/css/spotivate_new.css" rel="stylesheet" type="text/css" charset="utf-8">

I wasn’t using Django’s “staticfiles” functionality properly. I had effectively hard-coded the static path, when I should have been using the static template tag instead. The lines above should be changed to:


{% load staticfiles %}
...
...
<link href="{% static "web/bootstrap230/css/bootstrap.css" %}" rel="stylesheet" type="text/css" charset="utf-8">
<link href="{% static "web/jcarousel/css/style.css" %}" rel="stylesheet" type="text/css" charset="utf-8">
<link href="{% static "web/css/spotivate_new.css" %}" rel="stylesheet" type="text/css" charset="utf-8">

The server is now functioning properly, but we are not done yet. What if we need to modify some JavaScript? How do changes get copied to S3 during deployment? This doc provides good instructions on the topic; the short answer is to run collectstatic as part of every deploy.
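For example, here’s a minimal Fabric task sketch (the fabfile, paths and restart command are hypothetical; adjust to your setup). With STATICFILES_STORAGE pointing at S3, collectstatic uploads any new or changed static files to the bucket during the deploy:

# fabfile.py - a minimal deploy sketch, assuming Fabric is installed.
from fabric.api import cd, run, sudo

def deploy():
    with cd('/home/ubuntu/spotivate'):  # hypothetical project path
        run('git pull')
        # Pushes new/changed static files to S3 via django-storages.
        run('python manage.py collectstatic --noinput')
    sudo('service httpd restart')  # or apache2/nginx, per your setup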

Now, with the static and media files moved over to S3, and the database moved over to RDS, I’ve effectively removed all state from the app server. I can spin up another EC2 instance, drop my code there, and spread the traffic across two servers. If one goes down, we are still in business! And did I mention that pages load a lot faster too?

HOWTO: Deploy a fault tolerant Django app on AWS – Part 1: Migrate local MySQL to AWS RDS

For a while, Spotivate was running on a single EC2 instance. Everything was in it: MySQL, Django, static files, etc. Yes, we knew this was a terrible setup. Single point of failure, bad performance, etc. Here come the excuses. We had better things to do, like customer development, sales, design, product development, etc. We had no time for ops! Plus, our traffic wasn’t really that high, especially in the beginning. Our CPU / IO load was low. And we knew we could fix things fairly easily. Then one day, our EC2 instance went down for half an hour. Oops! We called AWS support. They’d had a disk failure. Our last snapshot was a day old. So our site was down that whole time.

We figured we had to do it right. And AWS makes it super easy. Our goals:

  • Remove all single points of failure, thus making the system fully fault tolerant.
  • As a result, response times should improve, especially under load.

Here’s the plan:

  • Part 1 (this article): migrate our local MySQL database to RDS.
  • Part 2: move our static and media files to S3.

In this article, I’ll talk about the steps we took to move our MySQL to RDS.

If you don’t know what RDS is, read more about it here. Basically, it’s AWS’s managed database service. RDS comes loaded with features. Here’s a summary of what’s relevant:

  • Easy to deploy via the Management Console or command line.
  • Automatic backup (you get to choose how many days and when).
  • Multi-availability zone deployment means AWS automatically creates a primary DB instance and synchronously replicates the data to a standby instance in a different Availability Zone, thereby removing this as a single point of failure.
  • Replication that allows you to create read-only replicas. This is especially valuable for Spotivate, since our personalized email server puts a heavy load on the DB. With reads offloaded to a replica, the performance of our website won’t be affected while we send out our weekly emails.

Well, let’s get on with it.

Step 1: Go to your management console and select RDS

[Screenshot: Launch Database Instance]

Step 2: Find a database engine that fits the bill. In our case, MySQL.

[Screenshot: Select database type]

Step 3: Here’s where you pick the MySQL version and the instance size.

[Screenshot: RDS Step 3]

  • Multi-AZ Deployment: Select “Yes”, which creates a standby instance in a different AZ. That’s the whole point of this article, right?
  • Allocated Storage: Choose a storage size that’s appropriate. Go small, since you can easily upgrade later with minimal downtime. Generally, estimate enough for 3 months down the road.
  • DB Instance Identifier: This is just the prefix of the instance’s public DNS name.
  • Master Username: Your database user name, typically “root”.
  • Master Password: Your database root user’s password.

Step 4: Here you specify the database name, port, etc.

You also get to create (or assign) a database security group for this database. This is a little different from an EC2 security group: for a database security group, you specify which EC2 security group may connect, and any EC2 instance that belongs to that EC2 security group has access to the database. By default, everything else is turned off, including ping. For more info, visit here.

[Screenshot: RDS Step 4]

Step 5: Backup Settings

Here, you specify the backup retention period and when to back up. Make sure your backup window and maintenance window don’t overlap.

[Screenshot: RDS Step 5]

Step 6: That’s it. Review and Launch.

[Screenshot: RDS Step 6]

Step 7: Test it out.

After the DB has been launched (takes several minutes – enough time for coffee), you can find the public DNS on the detail page. The hostname is accessible externally and within EC2; however, the security group by default prohibits any external access to the database server. Only EC2 instances that belong to the assigned EC2 security group have access. From my web server, I can use the typical “mysql” command to connect to the new RDS instance.

[Screenshot: RDS Step 7]

Step 8: Import.

Our database is fairly small, so we can just dump it and pipe it into the new instance. Here’s a fun command you can use (make sure you stop your web server first to avoid consistency issues):

mysqldump [your current db] | mysql --host=[rds host name] --user=root --password=[root password] [your current db]

That’s it! All you need to do now is change your Django settings to use the new database instance (see the sketch below). Bring down your local MySQL and restart your Django server to see if everything is running properly. If so, disable the local MySQL via chkconfig (e.g. sudo chkconfig mysqld off) to keep it from restarting on boot.
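For reference, the settings change looks roughly like this (a sketch; the HOST value is a made-up example of the endpoint RDS shows on the detail page):

# settings.py - pointing Django at the new RDS instance.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'spotivate',       # the database name you chose in Step 4
        'USER': 'root',            # master username from Step 3
        'PASSWORD': 'xxxxxxxxxx',  # master password from Step 3
        'HOST': 'spotivate.xxxxxxxxxx.us-east-1.rds.amazonaws.com',
        'PORT': '3306',
    },
    # A read replica can later be added under a second alias (e.g.
    # 'replica'), and heavy read-only jobs can use
    # Model.objects.using('replica') to keep load off the primary.
}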

Next time, I’ll talk about the migration of our static files to S3.

HOW TO: Install WordPress on Amazon EC2

First things first: I am kinda new to WordPress. (Yes, I said I was behind on technologies in my previous post, didn’t I?) I set up a WordPress blog for Spotivate a few months ago. Today, I set up my own WordPress on an Amazon EC2 micro instance. Here are the steps I took.

Christophe Coenraets has an amazing tutorial on how to install WordPress on EC2 here, so I am not going to repeat it. It really is that simple, and it took less than 5 minutes as advertised.

A few comments:

  • I didn’t create a small instance. Instead, I chose a (free) micro instance. Why? Well, it’s free. Also, micro will do for now, given I have no traffic. Plus, I want to show you how to migrate / upgrade from micro to small later!
  • There are a few typos in the tutorial:
    • mysql_secure_Installation should be mysql_secure_installation
    • tar -xzvf latest.tar.gzcd should be tar -xzvf latest.tar.gz
  • If you followed the instructions given by Christophe, the owner of the blog folder is “root”. Apache runs as “apache” by default. You will have problems uploading plugins and media files later on, since apache != root. Change the owner of /var/www/html/blog to apache:apache by running this command:
    sudo chown -R apache:apache /var/www/html/blog

Now that the URL http://www.jorgechang.com/blog is up, I want to make it so that http://www.jorgechang.com brings the user there as well. There are a few ways I can do that.

  1. Move my blog folder to the web root as described here. Yes, it looks complicated, but it really isn’t: it’s just a matter of moving the blog directory and reconfiguring a few things like Permalinks. I decided against this because I want to keep the URLs of my blog posts under /blog/, e.g. /blog/hello-world. I have other projects in mind, and I want freedom over my URL namespace in the future. REMEMBER: once a blog post is published, you need to make sure its URL works forever, as other people will likely link to your post if it’s any good. You have to maintain backward compatibility whenever you change your URL structure, so it’s better to keep all blog-related activity isolated under /blog.
  2. Set up an HTTP redirect so that end users are redirected to http://www.jorgechang.com/blog.

I am going with the redirect method. Now, there are different kinds of redirects. SEOmoz has an excellent article here that describes HTTP redirection in detail. Basically, there are three main types of redirect:

  1. 301 (Moved permanently)
  2. 302 (Moved temporarily)
  3. Meta Refresh

Option 3 requires the most work for everyone: I would need to write an index.html with a meta tag, and the end user’s browser needs to do more work too (load the page, parse it, execute the meta refresh, etc.), which makes it slower.

The difference between options 1 and 2 is very subtle, and mainly impacts how search engines crawl and index your pages. 301 is the most suitable option here, since I don’t plan on having anything other than my blog on my home page in the foreseeable future.

Edit /etc/httpd/conf/httpd.conf and stick this line at the end of the file (RedirectMatch is handled by mod_alias, so no RewriteEngine directive is needed; the anchored pattern catches both / and /index.html):

RedirectMatch 301 ^/(index\.html)?$ /blog

Restart Apache by running this command:

sudo service httpd restart

Now, the URLs for my blog posts look something like /blog?p=1. It doesn’t look very pretty and also affects SEO. Here’s how you can make it look more like /blog/hello-world.

Again, edit /etc/httpd/conf/httpd.conf, look for the following blocks of code, and change AllowOverride from None to All. Restart Apache afterwards.

<Directory />
Options FollowSymLinks
AllowOverride All
</Directory>
<Directory "/var/www/html">
Options Indexes FollowSymLinks
AllowOverride All
Order allow,deny
Allow from all
</Directory>

Create an .htaccess file in your blog folder. If you followed the instructions outlined by Christophe Coenraets, your blog folder is /var/www/html/blog. A lot of examples online tell you to create an empty .htaccess file, chmod it to 666, and let the WordPress admin write the changes. I highly recommend against that due to the security risk (most people forget to change it back to 644). Instead, simply create the file yourself with the following content:

<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /blog/
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /blog/index.php [L]
</IfModule>

Now, go to your WordPress admin. Navigate over to Settings >> Permalinks. Here’s what you should see.

[Screenshot: WordPress Permalink Settings]

Pick one that is suitable for you. Yoast suggested that it’s best to stick with “Post name” to give your posts a timeless look, so I followed his recommendation. Click Save Changes and voila: your blog post URLs are now readable and timeless.

Next: Themes, plug-ins, custom CSS, nav bar, and more! (Did I say the more I learn, the more behind I feel?)