September 3, 2017

free AWS Certs on non-static sites

In this post I will cover how to use AWS to host dynamic content with a free AWS SSL cert. A little background: I had a static website that I wanted to add some light dynamic content to, without changing up the hosting too much. Doing this with static content is easier and mostly well known (just upload to S3, put CloudFront in front, good to go):

Route53 (SSL) -> AWS CloudFront -> S3 bucket set to static website hosting
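
For completeness, here is a rough sketch of that static setup with the AWS CLI (the bucket name is a placeholder, and you'd still put a CloudFront distribution in front of the bucket afterwards):

    # the purely static approach, roughly (bucket name is a placeholder)
    aws s3 mb s3://example-site-bucket
    aws s3 website s3://example-site-bucket --index-document index.html --error-document 404.html
    aws s3 sync public/ s3://example-site-bucket --acl public-read
    # then create a CloudFront distribution in front of the bucket and point Route53 at it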

Even with purely static content, this worked OK until I tried clicking on a link, which S3 blew up on, treating the path as a bucket key since it does NOT have a real directory structure. I decided to scrap that idea and just do some actual web hosting.


What you need:

  • Amazon AWS account (duh)
  • domain managed by Route53
  • EC2 or Beanstalk instance running your stuff


What you get:

  • Free SSL certs (issued through AWS Certificate Manager; see the sketch after this list)
  • Free wildcard SSL certs (!)
  • automatic management of certs (easier than even letsencrypt certbot!)
  • potentially free overall, depending on your free tier usage
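
The certs themselves come from AWS Certificate Manager (ACM). As a rough sketch with a reasonably current AWS CLI (example.com is a placeholder), requesting one for the apex plus a wildcard looks something like this:

    # request a free ACM cert covering the apex domain and a wildcard (example.com is a placeholder)
    aws acm request-certificate \
        --domain-name example.com \
        --subject-alternative-names '*.example.com' \
        --validation-method DNS
    # ACM returns a CertificateArn; once the DNS validation record is added in Route53,
    # the cert is issued and renewed automatically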

New approach

You can't set a Route53 alias record directly to an EC2 instance (I didn't check Beanstalk, but..). The trick is essentially this:

  • Set up a Route53 alias record pointing at an Elastic Load Balancer
  • Give the load balancer a target group with a single target: the EC2 instance (haha)
  • profit

Route53 (SSL) -> Elastic Load Balancer -> target group forwarding BOTH HTTP and HTTPS to EC2 HTTP port 80
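
As a minimal sketch, the alias record can be created via the CLI, assuming the load balancer already exists (the zone IDs and DNS name below are placeholders; the alias HostedZoneId is the load balancer's canonical zone, visible in `aws elbv2 describe-load-balancers`):

    # upsert an alias A record pointing the domain at the load balancer (all IDs are placeholders)
    aws route53 change-resource-record-sets --hosted-zone-id YOUR_HOSTED_ZONE_ID --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "example.com",
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": "LOAD_BALANCER_CANONICAL_ZONE_ID",
            "DNSName": "my-load-balancer-1234567890.us-east-1.elb.amazonaws.com",
            "EvaluateTargetHealth": false
          }
        }
      }]
    }'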

The load balancer listeners look something like this:

[Screenshot: ELB listeners]

The 'web' target group's port is set to 80, protocol HTTP, and it has our EC2 instance in it. If you care about health checks, you will need to tweak them to accept a 301 or to check over HTTPS.
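
In CLI terms the same setup looks roughly like this, assuming the load balancer already exists (the ARNs, VPC ID, and instance ID are placeholders):

    # 'web' target group on port 80 with the single EC2 instance registered
    aws elbv2 create-target-group --name web --protocol HTTP --port 80 --vpc-id vpc-PLACEHOLDER
    aws elbv2 register-targets --target-group-arn TARGET_GROUP_ARN --targets Id=i-PLACEHOLDER

    # HTTP and HTTPS listeners, both forwarding to the same target group
    aws elbv2 create-listener --load-balancer-arn LOAD_BALANCER_ARN --protocol HTTP --port 80 \
        --default-actions Type=forward,TargetGroupArn=TARGET_GROUP_ARN
    aws elbv2 create-listener --load-balancer-arn LOAD_BALANCER_ARN --protocol HTTPS --port 443 \
        --certificates CertificateArn=ACM_CERT_ARN \
        --default-actions Type=forward,TargetGroupArn=TARGET_GROUP_ARN

    # let the health check accept the 301 that the nginx redirect returns
    aws elbv2 modify-target-group --target-group-arn TARGET_GROUP_ARN --matcher '{"HttpCode":"200,301"}'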

On the EC2 instance, a normal nginx installation serves up the content as usual. Only a minor tweak is needed to redirect non-HTTPS requests:

    # in your server block:
    # pass the scheme along if nginx also proxies to an app upstream
    proxy_set_header X-Forwarded-Proto $scheme;
    # the load balancer sets X-Forwarded-Proto on forwarded requests; bounce plain HTTP to HTTPS
    if ($http_x_forwarded_proto != 'https') {
        return 301 https://$host$request_uri;
    }
With all that done, I wrote a helper script to build the site and shoot it to the webroot over scp. If you're using a Debian/Ubuntu based distro, nginx will be running as www-data, so you can set permissions like so:

    sudo chown -R "$USER":www-data /var/www/html
    sudo chmod -R 0755 /var/www/html

Currently I’m still just using hugo to mostly kick out a static site. To easily rebuild the hugo site and shoot it up:


    #!/usr/bin/env bash
    # pull these values out if you want to run ssh commands after scp (permissions etc)
    key=~/.ssh/your-key.pem    # placeholder: path to your EC2 ssh key
    host=user@your-server      # placeholder: ssh user@host for the instance
    hugo                       # regenerate the site into public/
    scp -i "$key" -r public/* "$host:/var/www/html"

Hope this saves anyone else that needs to do something similar some time and money.

AWS DevOps SSL site
