Dark Souls CRUD Arena - The Prisoner Approach

After three weeks of toiling away with raw JavaScript, I have shipped the Prisoner Approach. This is the first of four iterations in the Dark Souls CRUD Arena. Context for what motivated this project is available here.

Here is the link to the repo: dark-souls-crud-arena-prisoner

Tech stack details:

  • Frontend: Vanilla JavaScript, HTML, CSS
  • Backend: Node.js, SQL, Postgres
  • Deployment: AWS using EC2 + RDS

This stack was decided ahead of time, with the expectation that I'd get bumps and bruises along the way.

One of my goals in building the same app four different ways is to gain a deeper understanding of why frameworks like Vue.js, React, Express.js, and Next.js exist. What problems are they solving for us? I figured the best way to start was to build my frontend as if it were the early 2000s — pre-jQuery — using only JS/HTML/CSS. As for the backend, using only Node.js would help me better appreciate the value that frameworks like Express.js provide.

Praise for KISS (Keep It Simple Stupid)

KISS is typically referenced when talking about designing software, but I think it applies equally to overall project scope. Oftentimes a developer bites off more than they can reasonably chew, leading to a demoralizing negative feedback loop. A key benefit of having industry experience and a small shipped video game under my belt is battle-tested intuition for the minimum scope needed to achieve a project's goals. In this case my goals are not IP related, but strictly learning oriented. I intentionally kept the scope of the Dark Souls CRUD Arena very limited, with just two resources -- heroes and equipment.

Project Considerations

Get something to prod immediately?

  • I knew going into this project there would be a lot of feature-implementation learning using only raw JS on the client. After weighing the option of deploying my app the instant I had anything working, I decided not to bog myself down with deployment issues off the rip. Thanks to Claude 3.7 Sonnet, deploying to AWS was mostly painless, mostly. AWS has always been a blob behind frosted glass for me, and the Prisoner Approach taught me that using EC2 with RDS is actually approachable. There are PaaS companies that abstract away needing to touch AWS directly, but for learning purposes I wanted to deploy in the most annoying way possible. I considered using DigitalOcean, but their lack of a free tier made me choose AWS instead. As for actually getting everything up and running, I'll detail that in the final section of this retrospective.

Whether to use Docker for local dev

  • In past personal projects, and in my most recent role, I've used Docker for dependency management to avoid the "works on my machine" scenario. I also just like keeping dependencies off my machine, but for this project I opted not to use containers given my lack of dependencies. I used Homebrew for all my needs :).

Nuts and Bolts

For each section, I'll provide an overview of what I did and the key learnings I gained. I'll number them in the order I encountered each concept. I'm not going to get overly fussy about making sure the learnings perfectly match the section I'm covering, because I want to invite you into the organic process of how things unfolded for me.

You'll notice that I don't always respect casing for key learning sections. Deal with it.

Data Layer

The first order of business was to set up a data layer, which included the schema and a local database instance for making updates directly via SQL queries. I selected Postgres because it's what I used in school projects, though I hadn't touched it in a long time. It's also open source, which I try to bias toward whenever possible. I installed Postgres via Homebrew. I also installed pgAdmin but didn't end up using it; I preferred interacting with the database via the CLI. The same applied to my prod RDS instance.

key learnings:

  1. installed postgresql via homebrew along with pgAdmin
  2. used the psql command in the CLI to access my database locally
  3. used brew services start/stop postgresql to spin the database server up and down locally. It was eye-opening to actually save data to my M1 MacBook and gave me a new appreciation for cloud servers.
  4. realized you don't "need" to run migrations to update your schema; they're just a good idea that comes baked in with frameworks like Ruby on Rails (what I was taught to use at Turing many years ago). Instead, you can keep a raw .sql file in your API layer and run it locally with psql -U username -d database_name -f file.sql.
  5. when working with SQL, you wrap an identifier in double quotes to make it obey casing, like so: "minLevel" INTEGER NOT NULL,. Without the quotes, Postgres folds minLevel down to minlevel.
  6. creating a new Pool instance from the pg package (node-postgres) gives us a way of establishing a connection between our database and a Node.js server. We pass an object to the Pool constructor telling it things like what host and port to connect to (see the sketch after this list).
  7. a brief side-quest on ports: i've never taken the time to understand what a port is and figured this would be as good a time as any. the host is the IP address or domain, but I wasn't sure what a port actually was. it's basically like a door to a room in a house, where the house is a computer; if the room is occupied, nobody else can move in. for example, we point at our database on port 5432, as is common practice for Postgres. HTTP typically uses port 80, while HTTPS uses 443, etc. if you try to run multiple services on the same port at the same IP address, it won't work. however, we can use that same port number on both an EC2 server and our own machine, since they have separate IP addresses. lastly, i learned one IP address can expose many ports. this is how you can have Postgres running on your local machine alongside Docker, a web server like Node, and any number of other services. it left me thinking an IP address effectively serves as a namespace, though i'm sure this is lacking nuance.
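For the curious, here's learning 6 in code form, a minimal sketch (the file name and env-var names are mine for illustration, not from the repo):

db.js
// a minimal sketch of wiring node-postgres up to a Node.js server
import pg from 'pg';
const { Pool } = pg;

const pool = new Pool({
    host: process.env.DB_HOST ?? 'localhost',
    port: Number(process.env.DB_PORT ?? 5432), // Postgres' conventional port (see learning 7)
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_NAME,
});

// the API layer funnels every query through this one pool of connections
export const query = (text, params) => pool.query(text, params);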

Node.js

I've used Node.js with Express at work; it's the ol' tried-and-true pairing for JavaScript on the backend when you're not reaching for serverless functions. But for this project I wanted to use Node.js by itself to see what all the fuss was about with those extra abstractions. I missed those abstractions, as it turned out.

Using ESM syntax

  • At my previous job, when building with Node.js/Express, I took for granted that I was able to use import/export syntax over require -- i.e. CommonJS, which just seems ugly in comparison. I had all but forgotten about dear old require. Well, as it turns out, even in Node.js v22.14.0, you don't get ESM syntax out of the box. There are a couple of ways to handle this, but the easiest way I found was just to add "type": "module" to my package.json file. This tells Node.js to treat all JavaScript files in the project as ES modules by default.
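For reference, that's a single field (the rest of package.json omitted):

{
    "type": "module"
}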

My server.js file

  • This is where the pain of using raw Node.js reared its head. See, Express parses JSON for us. It handles knowing whether to serve a file or fire off an endpoint handler for an incoming request. Oh sweet summer child, with raw Node.js we have to tell our project how to parse API requests and how to serve files by hand. As shown by this commit, you can see how much code we need to write just to get basic behaviors that come baked in with frameworks like Express. You'll notice I was adding endpoints directly to my server.js. Fear not. This was gross to me too, and I promptly broke out each set of requests by resource type. I also ended up breaking out the helper functions into a utils.js file. By the end of the project, server.js was responsible for running the web server and checking whether to serve a file or handle an API request, as shown here.
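To make the pain concrete, here's a sketch (not my exact code) of the plumbing raw Node.js demands for a single JSON endpoint. Express's express.json() middleware and router make nearly all of this disappear:

import { createServer } from 'node:http';

// hand-rolled JSON body parsing -- express.json() does this for you
const parseJsonBody = (req) =>
    new Promise((resolve, reject) => {
        let body = '';
        req.on('data', (chunk) => { body += chunk; });
        req.on('end', () => {
            try {
                resolve(body ? JSON.parse(body) : {});
            } catch (err) {
                reject(err);
            }
        });
        req.on('error', reject);
    });

const server = createServer(async (req, res) => {
    // hand-rolled routing -- app.post('/heroes', ...) does this for you
    if (req.method === 'POST' && req.url === '/heroes') {
        try {
            const hero = await parseJsonBody(req); // would just be req.body in Express
            res.setHeader('Content-Type', 'application/json');
            res.statusCode = 201; // resource created
            res.end(JSON.stringify(hero)); // forget res.end() and the request hangs
        } catch {
            res.statusCode = 400; // malformed JSON
            res.end();
        }
    } else {
        res.statusCode = 404;
        res.end();
    }
});

server.listen(3000);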

key learnings:

  1. __ at the start of a variable name in JavaScript communicates a pseudo-private variable that shouldn't be accessed outside the file, in this case __filename and __dirname. these come for free in CommonJS, but in ESM you have to recreate them yourself (see the snippet after this list).
  2. createServer API from node is what actually runs our web server and takes in requests and sends out responses. everything else is just an abstraction of our choosing with this function as the gatekeeper to our API.
  3. in the raw Node.js HTTP module, you do need to explicitly call res.end() to signal that all response headers and body have been sent and that the server should consider the message complete. if you don't call res.end(), the request will hang and eventually time out. i found this out the hard way when making calls from Postman to the first endpoint i wrote, not understanding why the request was just hanging. this was the source of the problem.
  4. same goes for res.setHeader. if we don't set the content type explicitly, the browser has to guess what we are sending. the request/response cycle will still work, however.
  5. since a hero wears a piece of equipment, i had to decide whether to split the folder structure into hero/equipment/routes.js or keep hero/routes.js and equipment/routes.js. i opted to keep them as separate, top-level route files, with nested resource endpoints inside hero/, to keep things simple. if the app expanded in scope, having separate top-level routes provides more flexibility as the API grows.
  6. DELETE FROM table_name; removes all rows from the table, but DOES NOT reset the IDs for new rows being written to the database; whereas TRUNCATE table_name RESTART IDENTITY removes every row AND restarts the ID sequence so new rows start back at 1.
  7. for serving files up to the client, URL parsing was required! regex was my friend here.
  8. got a lot of practice with sending proper status codes. i didn't know a 201 was used for resource-creation requests.
  9. got in a lot of practice with writing request validation. i created a helper file for handling validation for each resource type. it's simple, but works.
  10. it is OK, and in fact good practice, to have your local dev environment use .env and keep all references to prod environment variables completely separate, stored wherever you are deploying. makes sense, but i hadn't had to work with prod environment variables on a separate DB instance prior to this project.
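Circling back to learning 1: __filename and __dirname come for free in CommonJS, but once you opt into ESM via "type": "module" they no longer exist, and you have to rebuild them yourself from import.meta.url. The standard recipe:

import path from 'node:path';
import { fileURLToPath } from 'node:url';

// CommonJS injects these automatically; in ESM we derive them
// from the module's own URL
const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);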

The client

I had never manipulated the DOM (Document Object Model) by hand before, and boy oh boy does that process suck! We'll get into it. This took me way back to senior year of college, when I first took an intro HTML/CSS course as a business major. It was honestly so satisfying going back to such homely roots, writing basic HTML in a .html file. Speaking of which, it all starts with index.html. No really, it does.

Serving a file

  • index.html is the entry point to our client-side application. Whether you are working in raw JS, like I am here, or a framework, index.html serves as the starting point. That's because web servers are configured by default to serve a file named index.html when a directory is requested without specifying a filename. Since I was using Node.js by itself, I was effectively following convention by using an index.html file. I'm checking for that file in this snippet:
let filePath;
if (req.url.match(/^\/heroes\/\d+$/)) {
    // e.g. /heroes/42 -> serve the hero detail view
    filePath = path.join(webDirectory, 'hero/views/show.html');
} else if (req.url === '/') {
    // bare domain -> serve the conventional entry point
    filePath = path.join(webDirectory, 'index.html');
} else {
    // otherwise treat the URL as a path to a static asset
    filePath = path.join(webDirectory, req.url);
}

to then serve to the browser. If I were using something like Vite or Webpack, I would have gotten this handling for free.
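The serving half looks roughly like this, sketched with an illustrative content-type table (this is the part express.static() or a dev server like Vite gives you out of the box):

import fs from 'node:fs/promises';
import path from 'node:path';

// illustrative lookup -- a real app would cover more extensions
const MIME_TYPES = {
    '.html': 'text/html',
    '.css': 'text/css',
    '.js': 'text/javascript',
};

const serveFile = async (filePath, res) => {
    try {
        const data = await fs.readFile(filePath);
        // set Content-Type explicitly so the browser doesn't have to guess
        res.setHeader('Content-Type', MIME_TYPES[path.extname(filePath)] ?? 'application/octet-stream');
        res.end(data);
    } catch {
        res.statusCode = 404;
        res.end('Not found');
    }
};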

Frameworks are configured by default to point at index.html, but that is just convention. You could configure your app to use any file you wanted as the entry point to your client app. I find it's all too easy to forget this sort of detail when you spend so much time working within a framework like Vue.js or React.

How do I load JavaScript into an HTML file again?

  • Once I had a basic shell in my index.html file with my static HTML, it was time to relearn how to import JS files into it. Similar to index.html, there is a convention to use an index.js file for loading in all of the project's JS. With frameworks, you guessed it, you get this behavior for free! The head tag is where we put all our metadata to be loaded onto the page. This includes our link, title, and script tags. I won't get into exhaustive detail, but the script tag is how we link our JS files to our HTML. The first time I saw this pattern many years ago, I didn't think much of it. That's just how that works. This time, I was struck with oh god, this won't scale well unless I'm very intentional. Luckily, my instincts for how to organize code kicked in and I immediately saw index.js as being my pseudo App.js file. With index.js going into index.html, I recognized I could organize all my resource-specific scripts behind their own child entry file and import each of those into our parent web-svc/index.js like so:
web-svc/hero/main.js
import { handleCreateHeroButton } from './scripts/create.js';
import { handleUpdateHeroDetailsButton, handleCancelUpdateHeroDetailsButton, handleBrowseEquipmentButton, handleEquipmentSelectionCancelButton } from './scripts/show.js';
import { handleEquipmentSelectionSubmitButton } from './equipment/scripts/submit.js';
import { handleDeleteHeroButton } from './scripts/delete.js';
import { getHeroes } from './scripts/index.js';
import { showHero } from './scripts/show.js';

const initHero = () => {
    handleCreateHeroButton();
    handleUpdateHeroDetailsButton();
    handleCancelUpdateHeroDetailsButton();
    handleBrowseEquipmentButton();
    handleEquipmentSelectionCancelButton();
    handleEquipmentSelectionSubmitButton();
    getHeroes();
    showHero();
    handleDeleteHeroButton();
}

export { initHero };

...
web-svc/index.js
import { initHero } from './hero/main.js';
import { initEquipment } from './equipment/main.js';

initHero();
initEquipment();

As you can see, I settled on using main.js over index.js. Initially I was using index.js for heroes and equipment, with a show-all.js for the list view on each. That felt bad because index is conventionally associated with list views, so I ultimately refactored to use main.js for my equipment and hero resources and renamed show-all.js to index.js.

A couple bullets on the value of code organization

  • Thinking about how to organize code is my favorite thing about programming. You need syntax to write programs, yes, but the real value of a skilled developer comes from seeing how to architect software so it doesn't become miserable to work in. As I go further into my career, this is the skill I most want to strengthen. With AI now on the scene, sharpening my architectural judgment will prove all the more important as the barrier to writing slop continues to fall.

  • For this project, given that I don't have components, I opted to separate my JS files by CRUD operation. We already got a dose of pain on the server; now I was getting a dose on the client. See, without structured paradigms for code organization that include built-in state management, modularity, and component reusability -- i.e. all the things a frontend framework gives you -- I could feel just how brittle my code was. It made me think: when you remove abstraction or opinionated APIs, you have that much more rope to hang yourself with and that much less horsepower to build with. Working in raw JS means you write far more code to go hardly anywhere compared to using a framework, AND you have to be that much more disciplined about how you organize things so you don't get completely lost in the web of interacting with the DOM directly. I back-doored my way into understanding the importance of state management through the experience of not having it, which became evident when multiple users were using the app in production simultaneously.

A failure to consider concurrent users

  • With just me testing locally, I got the false impression that what I was building would work in production, at least for a handful of users. However, when I got on Discord and told my friends to play with what I had made, the UI got out of sync almost immediately. One user was seeing equipment attached to his hero while I was seeing no equipment associated with that same hero on my end. When I checked the database from my EC2 instance via the CLI, everything looked right. If the issue isn't with the database, it must be with the client? But I had tested the functionality thoroughly on my machine, over and over. What gives? Then it hit me: I'm adding and removing HTML from the DOM directly as users make requests to the backend. There is no single source of truth in the client layer of my app for tracking state as changes get made. This was really cool. I was experiencing first-hand why state management exists by discovering what happens when you don't have it.
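For contrast, here's a sketch (not code from the project) of the seed of a single source of truth: state lives in one place on the client, every mutation re-fetches from the server, and the DOM is re-rendered from that state rather than hand-edited:

// one place where client state lives
let heroes = [];

const render = () => {
    // illustrative element id -- repaint the list from state, never by hand
    document.querySelector('#hero-list').innerHTML = heroes
        .map((hero) => `<li>${hero.name}</li>`)
        .join('');
};

// after any create/update/delete, re-fetch and re-render
const refreshHeroes = async () => {
    const res = await fetch('/heroes');
    heroes = await res.json();
    render();
};

This alone doesn't keep multiple users in sync (you'd still need polling or websockets to learn about other people's changes), but it's the pattern every state management library formalizes.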

key learnings:

  1. as you remove abstraction, the weaknesses of your approach are laid bare.
  2. interacting with the DOM directly is very tedious, don't recommend.
  3. if the project got any larger, i'd have to start creating controllers by hand.

Deployment to Production

Deploying to production proved to be relatively straightforward. I already had the AWS CLI configured and had previously made a console user with full permissions. All I had to figure out was 1. standing up an EC2 instance and 2. setting up RDS to serve as my database.

I started by launching an EC2 instance - basically your own virtual Linux server in the cloud. I created a secure key pair to access it safely and set up the firewall (security groups) to allow the app to communicate with the world.

Once I had access to the server, I installed Node.js, pulled my code from GitHub, and set up PM2 - a process manager that keeps your app running smoothly and restarts it if it crashes. I created a production environment file with all the right settings for the cloud environment and made a small but important tweak to my database connection to work with AWS RDS.

For the database, I set up a PostgreSQL instance on Amazon's RDS service. It's basically a managed database server without all the maintenance headaches. I connected it to my EC2 instance, ran my SQL schema file to create the tables, and added SSL configuration to make the database connection secure.
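That SSL tweak amounts to something like this in node-postgres (a sketch; the env-var names are illustrative):

import pg from 'pg';
const { Pool } = pg;

// same Pool config as local dev, plus SSL for RDS
const pool = new Pool({
    host: process.env.DB_HOST, // the RDS endpoint
    port: 5432,
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_NAME,
    // accept AWS's certificate chain without pinning the CA bundle --
    // fine for a toy app, not something to ship to real prod
    ssl: { rejectUnauthorized: false },
});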

I also made sure the app would survive server restarts by setting up PM2 to launch automatically, and I configured detailed logging to troubleshoot any issues that might come up.

The last step was making sure all the network settings were correct so the EC2 server could talk to the RDS database, and users could access the application through their browsers.

And that was it! My full-stack Dark Souls CRUD Arena app is now running in the cloud at this IP address with a proper server and database setup.