James Thorpe

Uploading to S3

Dec 10, 2018 AWS Site

In my last post, I looked at how I configured and started using Wyam to build my new site. After finishing up that post and getting things working locally, I then figured out how to get things served from AWS.

I created a new S3 bucket, with the same name as my domain (james.pawsforthorpe.co.uk), and configured the properties on it to include static website hosting. This gives it an AWS domain - in my case, http://james.pawsforthorpe.co.uk.s3-website.eu-west-2.amazonaws.com/. A quick CNAME entry in my DNS settings later, and I was up and running. At this point, I was still manually uploading files through the AWS console website - this now needed to change.
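Incidentally, if you'd rather script the bucket setup as well, the AWS Tools for PowerShell cover it. This is a sketch of the equivalent commands rather than what I actually ran (I did it through the console):

```powershell
# Sketch only - bucket creation and static website hosting via the
# AWS Tools for PowerShell, equivalent to the console steps above
New-S3Bucket -BucketName james.pawsforthorpe.co.uk -Region eu-west-2

# Enable static website hosting with the usual index/error documents
Write-S3BucketWebsite -BucketName james.pawsforthorpe.co.uk `
    -WebsiteConfiguration_IndexDocumentSuffix "index.html" `
    -WebsiteConfiguration_ErrorDocument "error.html"
```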

Being a .NET house, we use the AWS Tools for PowerShell at work for this sort of programmatic task - I saw no reason to do anything different here. The first job was to create an IAM user to allow programmatic access; best practice is to limit its permissions to exactly what's needed.

To keep things layered properly, I went through several steps. The first one was to create a new policy to grant access to write files into the bucket. Nice and simple:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::james.pawsforthorpe.co.uk/*"
        }
    ]
}

I then created a group with only that policy attached, then created an IAM user within that group with an access key and secret key. The credentials then need to be stored locally for use - I'm using the standard .NET SDK credential store, so a simple:

Set-AWSCredential -AccessKey <access> -SecretKey <secret> -StoreAs JamesTheThorpes

sorted that out. Next up was writing a script to grab the Wyam output and push it up to S3, enter push.ps1:

#Switch to use the previously configured profile
Set-AWSCredential -ProfileName JamesTheThorpes

#Figure out where we're running from
$basePath = (Get-Location).Path + "\output\"
Write-Output $basePath

#Get all the files
Get-ChildItem "output" -Recurse -File |
ForEach-Object -Process {
    #And push each one
    $file = $_.FullName
    $keyname = $file.Replace($basePath, "")
    Write-Output "Uploading $keyname"
    Write-S3Object -Region eu-west-2 -BucketName james.pawsforthorpe.co.uk -File $file -Key $keyname -CannedACLName public-read
}

Pretty easy, right? Let's try running it.

Write-S3Object : Access Denied

Doh. A bit of experimentation and reading led me to an additional action needed in the IAM Policy: s3:PutObjectAcl. I added it and tried again. Same error.

Just to prove a point, I removed the -CannedACLName public-read from the command. The files now get uploaded. But without public read permissions, the existing website is now offline. Oops.

More reading, and more checking of permissions. On an "are you sure?" sort of hunch, I added the -CannedACLName argument back in and tried again - it worked this time. I guess the additional action in the policy just took a few minutes to synchronise!
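For completeness, the working policy with both actions in place looks like this:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::james.pawsforthorpe.co.uk/*"
        }
    ]
}
```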

So there we are: writing a new post now consists of creating a markdown file in /posts, running Wyam -p -w while I work on it, then, when I'm happy, running ./push to upload it to S3.

While I was at it, I've added a few meta tags to get nicer link previews in the likes of Twitter and Facebook.
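The tags in question are the Open Graph and Twitter Card variety - something along these lines (the values here are illustrative, not lifted verbatim from my templates):

```html
<!-- Illustrative Open Graph / Twitter Card tags; URL and description are made up -->
<meta property="og:title" content="Uploading to S3" />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://james.pawsforthorpe.co.uk/posts/uploading-to-s3" />
<meta property="og:description" content="Pushing Wyam output to an S3 static website bucket with PowerShell." />
<meta name="twitter:card" content="summary" />
```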
