What's in the bag? Behind the scenes at vBrownBag.com
- What’s in the bag? Behind the scenes at vBrownBag.com
- Part 2 of “What’s in the bag?” Behind the scenes at vBrownBag.com
- Part 3 of “What’s in the bag?” Behind the scenes at vBrownBag.com
- Part 4 of “What’s in the bag?” Behind the scenes at vBrownBag.com
- Part 5 of “What’s in the bag?” Behind the scenes at vBrownBag.com
- Part 6 of “What’s in the bag?” Behind the scenes at vBrownBag.com
- Automating the vBrownBag with AWS Serverless
OK, so now that I’ve got this blog dusted off, decided on a new direction, and archived all of the old posts, let’s get cracking on part 2 of this series. I’ll be going into very specific detail about what the new meatgrinder/automator process needs to do, and then branch out into how each of those steps is accomplished in future posts.
Quick note
Before we get started, I’d like to talk about what’s going on in a general sense. The majority of the new meatgrinder functionality will be done in PowerShell. Why PowerShell? Well, it’s what Al wrote for the previous iteration of the meatgrinder, and I want to continue using it because I love it. Most of the PowerShell Internet calls will use either Invoke-RestMethod or Invoke-WebRequest. The AWS calls will use cmdlets from AWS.Tools.Common and AWS.Tools.S3 for reading/writing/deleting S3 objects. Output logging writes to AWS CloudWatch, due to the way that Lambda handles output. Packaging the PowerShell function for Lambda will be covered later.
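To make that concrete, here’s a minimal sketch of the kind of S3 existence check and download the function performs, using AWS.Tools.S3 cmdlets. The bucket name, file names, and the .json metadata extension are placeholders I made up for illustration, not the real vBrownBag values:

```powershell
# Hedged sketch: bucket, base name, and the .json metadata extension are
# assumptions for illustration only.
Import-Module AWS.Tools.S3 -ErrorAction SilentlyContinue

function Get-UploadPair {
    param(
        [string]$Bucket,
        [string]$BaseName   # e.g. 'my-video' -> my-video.mp4 + my-video.json
    )
    foreach ($ext in '.mp4', '.json') {
        $key = "$BaseName$ext"
        # HeadObject call; throws if the object is missing
        Get-S3ObjectMetadata -BucketName $Bucket -Key $key | Out-Null
        # Pull the object down to Lambda's writable /tmp space
        Read-S3Object -BucketName $Bucket -Key $key -File "/tmp/$key" | Out-Null
    }
}
```

Get-S3ObjectMetadata maps to S3’s HeadObject, so a missing file fails fast before any downloading starts.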
Meatgrinder process
- After a vBrownBag recording is live-streamed, the YouTube recording is manually downloaded locally for editing & fancy on-screen graphics, YouTube shorts creation, etc.
- A file containing metadata such as video title, description, tags, etc. and an .mp4 file with the same name (but a different extension) are uploaded to a dedicated AWS S3 bucket via the AWS CLI or the AWS S3 PowerShell module.
- The Lambda function is invoked via the AWS CLI along with a JSON payload that specifies the video.mp4 & video metadata file, and the function proceeds to do a number of things:
- Verifies both the video file & metadata file exist in the S3 bucket
- Parses the metadata to create variables with the title, description, tags, etc.
- Adds a promotional content text block to the description, refreshes the OAuth token, and then POSTs the video to the YouTube API videos:insert endpoint, which responds in JSON. The unique YouTube video id is parsed from this response. Note: I’ll cover the OAuth token component in a future post about PSAuthClient.
- Lambda asks for more information on that video id from YouTube’s videos:list endpoint, which responds in JSON with the video id, title, excerpt, long description, thumbnail details, publishedAt (and more), and saves a copy of the media file (named as $videoId.mp4) & a $videoId.jpg thumbnail to our public S3 bucket.
- I really just want the YouTube excerpt and publishedAt values rather than reusing the original metadata: having the blog post & RSS feed entry share the same excerpt & timestamp makes the blog post look cleaner in the case of a longer description, and lets me backdate the blog post if this step is done later. An example of this would be the “catch-up” process I went through to get the website & RSS feed caught up on the last 6 months of the YouTube channel while the former iteration of the meatgrinder script was inoperable.
- Lambda deletes the working bucket upload, as processing is done and any video & thumbnails in our public bucket should avoid name collisions anyway by way of YouTube’s naming conventions.
- Lambda creates the vBrownBag.com blog post from the YouTube details using the original description (without the promotional text block), and POSTs it to vBrownBag.com using the WordPress REST API. More on that in a future post.
- Lambda then POSTs the YouTube thumbnail image to be used as the featured media (the post thumbnail), then links the featured media to the post. The WordPress REST API doesn’t allow side-loading the media, so we have to create the post, upload the media, then associate the post & media.
- Finally, Lambda grabs the Apple podcast RSS XML file, saves a backup to S3, adds the latest post to the top of the XML body, and sends it back to where it lives. I’ll show more on that process later, as it’s also pretty nifty. If you’re wondering “hey, what about us Android folks?!”, the answer is that Google has decided YouTube will be its podcasting source too, so it’s effectively built into the vBrownBag YouTube channel.
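The early steps above hinge on parsing the metadata file into variables and tacking the promotional block onto the description. This post doesn’t show the actual metadata format, so this sketch assumes a simple JSON file and a made-up promo line:

```powershell
# Assumed format: the real metadata file layout isn't shown in this post.
$metadataJson = @'
{
  "title": "Example vBrownBag Episode",
  "description": "A talk about something nifty.",
  "tags": ["vBrownBag", "AWS", "PowerShell"]
}
'@

$meta = $metadataJson | ConvertFrom-Json

# Hypothetical promotional block appended only for the YouTube upload;
# the blog post later reuses $meta.description without it.
$promo = "`n`nCatch new episodes live at https://vbrownbag.com"
$youTubeDescription = $meta.description + $promo
```

Keeping the promo-free description around separately is what lets the later WordPress step reuse the original text.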
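The create-then-associate dance with WordPress can be sketched like this. The /wp-json/wp/v2 routes are WordPress’s standard REST endpoints, but the credential handling, function shape, and parameter names here are my own assumptions, not the actual meatgrinder code:

```powershell
# Sketch only: assumes a WordPress application password in $Cred and
# pwsh's -Authentication Basic for preemptive basic auth over HTTPS.
function Publish-EpisodePost {
    param(
        [string]$Site,            # e.g. 'https://vbrownbag.com'
        [pscredential]$Cred,
        [string]$Title,
        [string]$Content,
        [string]$ThumbnailPath    # local copy of $videoId.jpg
    )

    # 1. Create the post
    $post = Invoke-RestMethod -Method Post -Uri "$Site/wp-json/wp/v2/posts" `
        -Authentication Basic -Credential $Cred -ContentType 'application/json' `
        -Body (@{ title = $Title; content = $Content; status = 'publish' } | ConvertTo-Json)

    # 2. Upload the thumbnail as a media item (no side-loading in the REST API)
    $media = Invoke-RestMethod -Method Post -Uri "$Site/wp-json/wp/v2/media" `
        -Authentication Basic -Credential $Cred -ContentType 'image/jpeg' `
        -Headers @{ 'Content-Disposition' = 'attachment; filename="thumb.jpg"' } `
        -InFile $ThumbnailPath

    # 3. Link the uploaded media to the post as its featured image
    Invoke-RestMethod -Method Post -Uri "$Site/wp-json/wp/v2/posts/$($post.id)" `
        -Authentication Basic -Credential $Cred -ContentType 'application/json' `
        -Body (@{ featured_media = $media.id } | ConvertTo-Json)
}
```

Three round trips, because the media endpoint only accepts raw file bytes and the posts endpoint only accepts an existing media id for featured_media.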
Here’s a rather simplified flow that I made with Lucidchart. It’s not 100% exact, but it’s enough to get the idea across. In my mind, I’d like to actually map out the POSTs and responses, but that feels a bit too… extra. 🙂
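The RSS step from the flow above is mostly XML surgery, and it’s easy to sketch locally. The toy feed below stands in for the real Apple podcast feed pulled from S3 (which carries namespaces and far more fields); only the insert-at-the-top mechanics are the point:

```powershell
# Toy feed standing in for the real podcast XML; structure is illustrative.
[xml]$feed = @'
<rss version="2.0">
  <channel>
    <title>vBrownBag</title>
    <item><title>Older episode</title></item>
  </channel>
</rss>
'@

# Build the new <item> and insert it ahead of the existing ones
$item = $feed.CreateElement('item')
$titleNode = $feed.CreateElement('title')
$titleNode.InnerText = 'Newest episode'
$item.AppendChild($titleNode) | Out-Null

$channel = $feed.rss.channel
$channel.InsertBefore($item, $channel.SelectSingleNode('item')) | Out-Null

# $feed.OuterXml is what would be written back to where the feed lives
```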

That’s all for now. The next post in this series will be a brief overview of the development environment, change management, and the tools necessary to make all of this work.