
Running Puppeteer in a Docker container on Raspberry Pi

Puppeteer is a Node.js module that allows interacting with a (headless) web browser programmatically. This is extremely useful for automating website testing, generating screenshots and PDFs of web pages, or submitting forms programmatically.

Docker offers numerous benefits: a standardized environment, isolation, and rapid deployment, to name a few. These benefits might be why you’d want to run Puppeteer inside a Docker container. Unfortunately, doing so on Raspberry Pi is not straightforward. There are a few issues that make it harder than usual. Luckily, they are all solvable. Let’s take a look.

Problem 1: Chromium included in Puppeteer does not work on Raspberry Pi

Puppeteer by default downloads a matching version of Chromium which is guaranteed to work out of the box on supported platforms. Unfortunately, Chromium does not currently provide an arm build that works on Raspberry Pi, so running stock Puppeteer on Raspberry Pi ends with a crash. This can be solved by installing Chromium with apt-get install chromium -y and telling Puppeteer to use it by passing executablePath: '/usr/bin/chromium' to the launch() function as follows:

const browser = await puppeteer.launch({
    executablePath: '/usr/bin/chromium',
    args: []
});

When doing this, Puppeteer should no longer need to download Chromium as it will be using the version installed with apt-get, so it makes sense to skip this step by setting the corresponding environment variable in the Dockerfile:

ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true

This will significantly reduce the time needed to install node modules.

Problem 2: The base image installs an old version of Chromium

Some node Docker images are based on old distributions that contain only older versions of Chromium. Most notably the node:16 image is based on buster. If you use this base image the only version of Chromium you will be able to install with apt-get is 90.0.4430.212-1. Unfortunately this version doesn’t work in a Docker container – it just hangs indefinitely. Moving to the node:16-bullseye base image allows installing a much newer version of Chromium (108.0.5359.124) where this is no longer a problem.
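
If you are not sure which Chromium version a particular base image will let you install, a quick check like the one below should tell you before committing to a base image. This is just a sketch – the image tag is an example and any machine with Docker will do:

# Print the Chromium version apt would install in the given base image
docker run --rm node:16-bullseye bash -c "apt-get update -qq && apt-cache policy chromium"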

Problem 3: Puppeteer crashes on launch

Puppeteer will not launch in a Docker container without additional configuration. Chromium is not able to provide sandboxing when running inside a container, so it needs to be launched with at least the --no-sandbox argument. Otherwise it will crash with the following error message:

Failed to move to new namespace: PID namespaces supported, Network namespace supported, but failed: errno = Operation not permitted

The sandbox is a security feature and running without it is generally discouraged. Unfortunately, running without a sandbox appears to be currently the only way to run Puppeteer inside a Docker container. In the past the --no-sandbox option required running Puppeteer as root, which only increased the risk. Luckily, this no longer seems to be the case – it is now possible to launch Puppeteer with the --no-sandbox option as a non-privileged user.

There are a few more options that might be worth exploring if launching Puppeteer inside a container fails (a quick way to smoke-test these flags directly against Chromium is shown right after the list):

  • --disable-gpu – disables GPU hardware acceleration (which is usually not available when running in Docker)
  • --disable-dev-shm-usage – prevents Chromium from using shared memory (/dev/shm), which is often very small in Docker containers
  • --disable-setuid-sandbox – disables the setuid sandbox
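
When experimenting with these flags, it can help to take Puppeteer out of the equation and launch the apt-installed Chromium directly inside the container with the same arguments. A minimal smoke test could look like this (the URL is just a placeholder):

# If this hangs or crashes, the problem is in Chromium or the container setup, not in Puppeteer
chromium --headless --no-sandbox --disable-gpu --disable-dev-shm-usage --dump-dom https://example.com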

Putting everything together

The information provided above should be all that is needed to build a Docker image for a Node.js app that uses Puppeteer and runs on Raspberry Pi. Below is an example Dockerfile for such an image. It contains comments to make it easy to see how the solutions discussed above were applied.

# Ensure an up-to-date version of Chromium 
# can be installed (solves Problem 2)
FROM node:16-bullseye 
# Install a working version of Chromium (solves Problem 1)
RUN apt-get update
RUN apt-get install chromium -y
ENV HOME=/home/app-user
RUN useradd -m -d $HOME -s /bin/bash app-user 
RUN mkdir -p $HOME/app 
WORKDIR $HOME/app
COPY package*.json ./
COPY index.js ./
RUN chown -R app-user:app-user $HOME
# Run the container as a non-privileged user (discussed in Problem 3)
USER app-user
# Make `npm install` faster by skipping 
# downloading default Chromium (discussed in Problem 1)
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true
RUN npm install
CMD [ "node", "index.js" ]

Because the application also requires a couple of modifications to how the headless browser is launched, here is a small example application illustrating these changes with comments:

const puppeteer = require('puppeteer');
(async() => {
    const browser = await puppeteer.launch({
        // use Chromium installed with `apt` (solves Problem 1)
        executablePath: '/usr/bin/chromium',
        args: [
            // run without sandbox (solves Problem 3)
            '--no-sandbox',
            // other launch flags (discussed in Problem 3)
            // '--disable-gpu',
            // '--disable-dev-shm-usage',
            // '--disable-setuid-sandbox',
        ]
    });
    const page = await browser.newPage();
    await page.goto('https://www.google.com/', {waitUntil: 'networkidle2'});
    let e = await page.$('div#hplogo');
    let p = await e?.getProperty('title');
    if (p) {
      console.log(`Today's doodle: ${await p.jsonValue()}`);
    } else {
      console.log('No Doodle today :(');
    }
    await browser.close();
})();

Finally, when run in a container, the application prints the title of the current Google Doodle (or a message that there is none).

Both the application and the Dockerfile are also available on GitHub.

Conclusion

Running Puppeteer inside a Docker container is tricky – especially when doing so on Raspberry Pi. This post discussed the key obstacles and provided solutions to overcome them. In addition, a demo containerized app was included to illustrate the main points.


Troubleshooting permission issues when building Docker containers

Docker containers run by default as root. In general, this is not a recommended practice as it poses a serious security risk. This risk can be mitigated by configuring a non-root user to run the container. One of the ways to achieve this is to use the USER instruction in the Dockerfile. While running a container as a non-root user is the right thing to do, it can often be problematic as insufficient permissions can lead to hard-to-diagnose errors. This post uses an example node application to discuss a few permission-related issues that can pop up when building a non-root container, along with some strategies that can help troubleshoot these kinds of issues.

Example

Let’s start with a very simple Dockerfile for a node application:

FROM node:16
WORKDIR /usr/app
COPY . .
RUN npm install
CMD [ "node", "/usr/app/index.js" ]

The problem with this Dockerfile is that any Docker container created from it will run as root.
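
One way to confirm this is to check which user the container runs as – whoami will print root for an image built from the Dockerfile above (the image name below is just an example):

docker run --rm my-node-app whoami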

To fix that, we can modify the Dockerfile to create a new user (let’s call it app-user) and move the application to a sub-directory in the user’s home directory like this:

FROM node:16
ENV HOME=/home/app-user
RUN useradd -m -d $HOME -s /bin/bash app-user
RUN chown -R app-user:app-user $HOME
USER app-user
WORKDIR $HOME/app
COPY . .
RUN npm install
CMD [ "node", "index.js" ] 

Unfortunately, introducing these changes makes it impossible to build a docker image – npm install now errors out due to insufficient permissions:

Step 8/9 : RUN npm install
 ---> Running in a0800340b850
npm ERR! code EACCES
npm ERR! syscall mkdir
npm ERR! path /home/app-user/app/node_modules
npm ERR! errno -13
npm ERR! Error: EACCES: permission denied, mkdir '/home/app-user/app/node_modules'
npm ERR!  [Error: EACCES: permission denied, mkdir '/home/app-user/app/node_modules'] {
npm ERR!   errno: -13,
npm ERR!   code: 'EACCES',
npm ERR!   syscall: 'mkdir',
npm ERR!   path: '/home/app-user/app/node_modules'
npm ERR! }
...

Inspecting the app directory shows that the owner of this directory is root and other users don’t have the write permission:

app-user@d0b48aa18141:~$ ls -l ~
total 4
drwxr-xr-x 1 root root 4096 Jan 15 05:48 app

The error is related to using the WORKDIR instruction to set the working directory to $HOME/app. This is not a problem by itself – it’s actually recommended to use WORKDIR to set the working directory. The problem is that because the directory didn’t exist, WORKDIR created it, but made root the owner. The issue can be easily fixed by explicitly creating the working directory with the right permissions before the WORKDIR instruction runs, which prevents WORKDIR from creating the directory. The new Dockerfile that contains this fix looks as follows:

FROM node:16
ENV HOME=/home/app-user
RUN useradd -m -d $HOME -s /bin/bash app-user
RUN mkdir -p $HOME/app
RUN chown -R app-user:app-user $HOME
USER app-user
WORKDIR $HOME/app
COPY . .
RUN npm install
CMD [ "node", "index.js" ]

Unfortunately, this doesn’t seem to be enough. Building the image still fails due to a different permission issue:

Step 10/11 : RUN npm install
 ---> Running in 860132289a60
npm ERR! code EACCES
npm ERR! syscall open
npm ERR! path /home/app-user/app/package-lock.json
npm ERR! errno -13
npm ERR! Error: EACCES: permission denied, open '/home/app-user/app/package-lock.json'
npm ERR!  [Error: EACCES: permission denied, open '/home/app-user/app/package-lock.json'] {
npm ERR!   errno: -13,
npm ERR!   code: 'EACCES',
npm ERR!   syscall: 'open',
npm ERR!   path: '/home/app-user/app/package-lock.json'
npm ERR! }
...

The error message indicates that this time the problem is that npm install cannot access the package-lock.json file. Listing the files shows again that all copied files are owned by root and other users don’t have the write permission:

ls -l
total 12
-rw-r--r-- 1 root root  71 Jan 15 02:03 index.js
-rw-r--r-- 1 root root 849 Jan 15 01:36 package-lock.json
-rw-r--r-- 1 root root 266 Jan 15 05:21 package.json

Apparently, the COPY instruction by default runs with root privileges, so the files will be owned by root even if the COPY instruction appears after the USER instruction. An easy fix is to change the Dockerfile to copy the files before configuring file ownership (alternatively, it is possible to specify a different owner for the copied files with the --chown switch, e.g. COPY --chown=app-user:app-user . .):

FROM node:16
ENV HOME=/home/app-user
RUN useradd -m -d $HOME -s /bin/bash app-user
RUN mkdir -p $HOME/app
COPY . .
RUN chown -R app-user:app-user $HOME
USER app-user
WORKDIR $HOME/app
RUN npm install
CMD [ "node", "index.js" ]

Annoyingly, this still doesn’t work – we get yet another permission error:

Step 9/10 : RUN npm install
 ---> Running in d4ebcec114cb
npm ERR! code EACCES
npm ERR! syscall mkdir
npm ERR! path /node_modules
npm ERR! errno -13
npm ERR! Error: EACCES: permission denied, mkdir '/node_modules'
npm ERR!  [Error: EACCES: permission denied, mkdir '/node_modules'] {
npm ERR!   errno: -13,
npm ERR!   code: 'EACCES',
npm ERR!   syscall: 'mkdir',
npm ERR!   path: '/node_modules'
npm ERR! }
...

This time the error indicates that npm install tried creating the node_modules directory directly in the root directory. This is unexpected as the WORKDIR instruction was supposed to set the default directory to the app directory inside the newly created user home directory. The problem is that the last fix was not completely correct. Before, COPY was executed after WORKDIR so it copied the files to the expected location. The fix moved the COPY instruction so that it is now executed before the WORKDIR instruction. This resulted in copying the application files to the container’s root directory, which is incorrect. Preserving the relative order of these two instructions should fix the error:

FROM node:16
ENV HOME=/home/app-user
RUN useradd -m -d $HOME -s /bin/bash app-user
RUN mkdir -p $HOME/app
WORKDIR $HOME/app
COPY . .
RUN chown -R app-user:app-user $HOME
USER app-user
RUN npm install
CMD [ "node", "index.js" ]

Indeed, building an image with this Dockerfile finally yields:

Successfully built b36ac6c948d3

Yay!

The application also runs as expected.

Debugging strategies

Reading about someone else’s errors is one thing; figuring them out yourself is another. Below are a few debugging strategies I used to understand the errors described in the first part of this post. Even though I mention them in the context of permission errors, they can be applied in a much broader set of scenarios.

Carefully read error messages

All error messages we looked at were very similar, yet each signaled a different problem. While the errors didn’t point directly to the root cause, the small hints were very helpful in understanding where to look to investigate the problem.

Check Docker documentation

Sometimes our assumptions about how a given instruction runs may not be correct. Docker documentation is the best place to verify these assumptions and understand whether a wrong assumption could be the culprit (e.g. the incorrect assumption that COPY will make the current user the owner of the copied files).

Add additional debug info to Dockerfile

Sometimes it is helpful to print additional debug information when building a docker image. Some commands I used were:

  • RUN ls -al
  • RUN pwd
  • RUN whoami

They allowed me to understand the state the container was in at a given time. One caveat is that, by default, Docker caches intermediate steps when building images, which may result in the debug information not being printed when re-building a container if nothing changed and the step was cached.
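
If the output of these commands does not show up when re-building, you can force a full rebuild and ask for plain progress output (the latter matters with a reasonably recent Docker version, where BuildKit collapses step output by default); the image tag here is just an example:

docker build --no-cache --progress=plain -t perm-demo .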

Run the failing command manually and/or inspect the container

This is the ultimate debugging strategy – manually reproduce the error and inspect the container state. One way to make it work is to comment out all the steps starting from the failing one and then build the image. Once the image is built, start a container like this (replace IMAGE with the image id):

docker run -d IMAGE tail -f /dev/null

This will start the container whose state is just as it was before the failing step was executed. The command will also keep the container running which makes it possible for you to launch bash within the container (replace CONTAINER with the container id returned by the previous command):

docker exec -it CONTAINER /bin/bash

Once inside the container you can run the command that was failing (e.g. npm install). Since the container is in the same state it was in when the build failed, you should be able to reproduce the failure. You can also easily check for the factors that caused the failure.
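
Putting these pieces together, a typical debugging session could look like this (the image and container names are just examples):

# Build the image with the steps starting from the failing one commented out
docker build -t perm-debug .
# Start a long-running container from that image
docker run -d --name perm-debug-ctr perm-debug tail -f /dev/null
# Open a shell inside it
docker exec -it perm-debug-ctr /bin/bash
# Inside the container: re-run the failing command and inspect the state
npm install
ls -la
whoami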

Conclusion

This post showed how to create a Docker container that does not run as root and discussed a few permission issues encountered in the process. It also described a few debugging strategies that can help troubleshoot a wide range of issues – including issues related to permissions. The code for this post is available on GitHub in my docker-permissions repo.

Craigslist automation

Update: this project is now available on npm: https://www.npmjs.com/package/craigslist-automation

A long time ago Craigslist allowed accessing their posts via RSS. It was possible to append &format=rss to the Craigslist URL query string to get programmatic access to posts. Unfortunately, Craigslist stopped supporting RSS a few years ago and it does not seem like it (or a replacement) is going to be available anytime soon, if ever. With RSS gone, the community stepped up and created python-craigslist – a Python package that allows accessing Craigslist posts from a Python program. I remember experimenting with it some time ago and it worked pretty well. I tried it again last night and to my surprise I couldn’t get any results for my queries. I checked the project’s repo and quickly found an issue that looked exactly like mine. The issue points out that the HTML Craigslist returns no longer contains posts but a message saying that a browser with JavaScript support is required to see the page. This breaks the python-craigslist library, as it just sends HTTP requests and parses the returned HTML. It seems Craigslist no longer serves results as plain old HTML but uses JavaScript to build the post gallery dynamically. Not being a web developer, it surprised me to see the same behavior in a browser – out of curiosity I loaded the “cars+trucks” for sale post gallery, checked the page source, and saw the same message as mentioned in the GitHub issue. However, after inspecting the DOM with the built-in developer tools, I could see individual posts.

For my experiment, python-craigslist was not an option anymore and I needed a different solution. I spent a few minutes looking at the network requests Craigslist was sending, and it was clear that making sense out of them would require a lot of effort. What I wanted was something that acts the same way as a browser but can be driven programmatically.

Enter the headless browser 

When I described what I wanted, I realized this was an exact definition of a headless browser – a browser that can run without a graphical user interface. I knew Chrome could run in the headless mode and could be controlled from a Node.js project as I had played with it a few years earlier. Because it had been a while, I wanted to check how people do this these days. Sure enough, I quickly found puppeteer – a Node.js library that allows interacting with headless Chrome. I quickly created a new Node.js project, configured it to use TypeScript and voila – with a few lines of code:

import * as puppeteer from "puppeteer";
(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(
    "https://seattle.craigslist.org/search/cta?query=blazer%20k5",
    {
      waitUntil: "networkidle0",
    }
  );
  let elements = await page.$$("a.post-title");
  console.log(elements.length);
  await Promise.all(
    elements.map(async (e) => {
      let href = await e.getProperty("href");
      console.log(await href.jsonValue());
    })
  );
  await browser.close();
})();

I was able to get links to listings from my query.

Obviously this is only a simple prototype, but it could be useful for conducting simple experiments.
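
If you would like to run a similar sketch yourself, the setup boils down to installing the dependencies and executing the script with ts-node – a minimal layout under my own assumptions (the file name is mine) could be:

npm install puppeteer typescript ts-node
npx ts-node craigslist.ts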


Cloud Enabled Commodore 64: Part V – Do It Yourself

By now many people have seen the demo of the Cloud Enabled Commodore 64 project and read the posts discussing the implementation and the retrospective, and some have commented that they would like to try it out themselves. This post describes how to do that.

We will start by listing the required hardware and then move on to the required software.

Hardware

There are two hardware options to try the project out – you can either use an emulator or a real Commodore 64. The emulator route is a bit easier as it does not require a working Commodore 64 and additional peripherals. You will still need a NodeMCU board like this:

NodeMCU board

which you can get on eBay for under $5. If you decide to try it out on a real Commodore 64, you will need a C64 WiFi modem. Make sure it is using the NodeMCU module and that the module is accessible. This is what mine looks like:

C-64 WiFi Modem

For the real C-64, you will also need to be able to load the cross-compiled program onto your C-64. There are a few possibilities here – I used an SD2IEC floppy drive emulator, and it worked great for my needs.

Software

Before moving to software, I would like to start with a disclaimer: I did all the work on macOS. I will try my best to provide instructions for Windows and Linux, but they might be lacking.

Git

You will need git to clone the project repo. It is very likely that you already have git installed on your machine but if not follow instructions from here: https://git-scm.com/book/en/v2/Getting-Started-Installing-Git

make

You will need make to build the project and the cc65 toolchain. On macOS you can get it by installing the Apple developer tools. On Linux, you likely already have it. On Windows you will need to either install Cygwin or use WSL (Windows Subsystem for Linux).

cc65

cc65 is a “cross development package for 6502 systems”. To get it, follow the instructions listed on https://cc65.github.io/getting-started.html. Please make sure the tools can be resolved (e.g. run sudo make avail or add them to the path). You can test your installation by running cc65 from the command line and verifying that it prints cc65: No input files.

Arduino IDE

We will need to flash new firmware to the NodeMCU board, for which we will use the Arduino IDE. It can be downloaded from https://www.arduino.cc/en/software.

VICE emulator

If you are going the emulator route (which I recommend even if you eventually want to use the real C-64) you will need the VICE emulator which you can download from: https://vice-emu.sourceforge.io/index.html#download

Dotnet

You will also need the .NET SDK. It will be used to run the server locally, which makes it easier to test and troubleshoot if necessary. It will also be needed if you decide to publish the server to Azure. You can get the .NET SDK from https://dotnet.microsoft.com/en-us/download
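
A quick way to verify the installation (assuming the SDK is on your path) is:

dotnet --version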

Node and npm

The server contains a web client which depends on a few node packages (most notably the @microsoft/signalr package), so you will need npm to install these packages.

Preparing and running the application

With all pre-requisites installed we can get down to business and try to start the application. Here are the steps:

  1. Clone the project repo
    Run git clone https://github.com/moozzyk/SignalR-C64

  2. Start the TestServer
    The test server is the chat server our chat application will be talking to. Note that the server registers an HTTPS endpoint which will use a developer TLS certificate when running locally. This may result in a warning being shown or a prompt to register the certificate (which you can do by executing dotnet dev-certs https --trust). If you don’t want to see the warning you can remove the HTTPS URL from this line and just use HTTP. This will work fine for local runs but is not recommended (perhaps not even possible) when running the server on Azure. You will also want to make sure that the server is accessible from outside of your machine (i.e. make sure that other devices on your network can access the application).
    To start the TestServer you need to go (cd) to the TestServer directory and run:
    npm install
    dotnet run

  3. Verify that the test server works
    Connect to the server using a browser. Ideally you would want to connect using an external IP or the name of your machine (i.e. avoid 127.0.0.1 or localhost) or use another device connected to the same network. Once connected try to send a message – if you receive the message you typed, the server is set up correctly. (Note that if you try connecting to the server with HTTPS you may see warnings caused by using the local (dev) TLS certificate.)


  4. Back up the C64 WiFi Modem firmware (optional)
    If you are using the C64 WiFi Modem for this project, you may want to back up the currently installed firmware, as the next step will overwrite it, effectively removing the original functionality provided with the modem (i.e. connecting to BBSes). One way to do this is to use esptool to download the existing firmware and then upload it later to bring back the original functionality. You can install esptool by running:
    pip install esptool

    To download the firmware connect the modem to your computer and run (remember to update the port to point to your serial device):
    esptool.py --baud 115200 --port /dev/cu.usbserial-1420 read_flash 0x0 0x400000 ~/tmp/C64WiFi-backup-4M.bin

    To upload the firmware back to the board run:
    esptool.py --baud 115200 --port /dev/cu.usbserial-1420 write_flash 0x00000 ~/tmp/C64WiFi-backup-4M.bin

    You can use screen (or Putty on Windows) to test that the firmware has been uploaded correctly. First run:
    screen /dev/cu.usbserial-1420 300
    and then type AT? You should see something like this:
    AT?
         cOMMODORE4EVER V2.3 wIFI mODEM
    ...

  5. Upload the firmware to the NodeMCU board
    – Start the Arduino IDE, open the EspWs.ino sketch and set the default credentials on this line. (For simplicity, the code running on the C-64 does not allow setting credentials – it assumes that the credentials are properly configured and will just initiate the WiFi connection.)
    – If you are planning to use a real C-64, set the transfer speed to 600 bauds here. Leave it at 1200 if using the emulator, as VICE does not seem to support 600 bauds.
    – Connect your NodeMCU board (or the C64 WiFi Modem) to your computer.
    – Make sure to select the NodeMCU 1.0 (ESP-12E Module) board (if you can’t see this board you may need to add it first via the Board Manager: Tools -> Board Manager, search for “esp8266” and install it).

    – Select the device.
    – Upload the firmware to the board.

  6. Verify firmware was deployed successfully
    Go to the EspWs directory and run the following command (make sure to provide correct values for the server, device and transfer rate):
    python3 prototype.py 192.168.86.250:5000 /dev/cu.usbserial-1420 1200

    If the firmware has been uploaded correctly you should see the following output:
    b'\x03\x00'
    OK
    b'\x05\x1dws://192.168.86.250:5000/chat'
    WS
    b'Connected'
    b'\x06*{"protocol": "messagepack", "version": 1}\x1e'
    OK
    DATA
    b'{}\x1e'
    DATA
    b'\x02\x91\x06'
    DATA
    b'\x02\x91\x06'

    Note there will be some delay before you will be able to see most of the output, as it takes about 10 seconds for the board to connect to WiFi. Another important thing is that if you stop the script and want to try again, you’ll need to reset the board (press the RST button on the NodeMCU module and wait a few seconds before trying again).
  7. Configure VICE
    This step is only needed if you want to run the app using the VICE emulator.
    Open VICE and go to Settings -> Peripheral devices -> RS232.
    Make sure to “Enable Userport RS232 Emulation” and select the device that you want to use. In the RS232 devices section you need to provide the device filename and the transfer speed. For the emulator you want to use 1200 bauds. Here is how this is configured in my case:

  8. Configure chat server URL
    You will need to set the correct URL to be able to connect to your chat server by modifying the value here.
  9. Build and run
    The application should now be ready to run. Go to the App directory where you will be able to build the application with make. The makefile supports a few targets. The default target (i.e. running make without any arguments) will compile the app to a .prg file. make clean will delete temporary files. make d64 creates a .d64 (disk image) file you can either attach to the emulator or use to run on a real C-64 (e.g. using SD2IEC). The fastest way to build and run the app on the emulator is to invoke the following command:
    make clean && make && x64sc --autoload signalrdemo.prg
    It will clean temporary files, create a .prg file, start the emulator and automatically load the .prg. Then you can just type run in the emulator to run the app.
  10. Deploy the server to Azure (or a cloud provider of your choice)
    If the application is working correctly in the local environment you can deploy the server to a cloud provider. You will need to update the URL accordingly and compile with the new settings (steps 8 and 9).
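
For the emulator route, once the prerequisites are in place and the firmware and URLs are configured, the build-and-run part boils down to roughly the following commands, run from the repository root in two terminals (paths per the steps above):

# Terminal 1 – start the chat server
cd TestServer
npm install
dotnet run

# Terminal 2 – build the C-64 app and run it in the VICE emulator
cd App
make clean && make
x64sc --autoload signalrdemo.prg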

This post concludes the Cloud Enabled Commodore-64 mini series. I hope this project brought back some good memories for you as it did for me.

Cloud Enabled Commodore 64: Part IV – Retrospective

In the previous post we looked at the implementation details of the Cloud enabled Commodore 64. In this post I would like to sum up the project and reflect on what I might have done differently.

In general, I am extremely happy with how the project turned out. First and foremost – it does work! I was able to put all the technologies together and line all the stars up to the point where an almost 40-year-old computer is talking to the cloud and thus can connect to other – more modern – devices.

I also have to admit that I am amazed at how great the VICE emulator is. I was not confident that it would be possible to implement the project end-to-end on an emulator and – even if it were – that the code would run on the actual hardware without any additional debugging.

Having said that, after looking back at this project I found a few things I might have done differently.

Time management

I did not put any timelines on this project. I worked on it only on and off – when time allowed, and when I felt like it. As a result, it took almost a year to drive it to completion. This was only a hobby project so it is not a big deal, but I feel that if I had been more focused, I could have finished it in half that time.

Use the C language instead of assembly

When I embarked on this project, I decided to use assembly exclusively for the Commodore 64 code. Halfway through, I looked more at what cc65 had to offer and pondered moving to C. There were a few downsides to this approach, like the ramp-up time or having to figure out how to mix assembly and C, especially in the context of interrupt handling. I was also concerned about the size of the binary produced by the compiler. This was probably unreasonable as I did not include any artifacts like graphics or music. Switching to the C language could have increased my productivity in the long run, reduced the number of hacks I implemented (especially towards the end of the project) and made the code more accessible.

Run the SignalR client on the board instead of on the C-64

One of the principles of the project was to use the NodeMCU/ESP8266 board only as a simple network card and have the C64 run everything else. I felt that pushing more logic to the NodeMCU board would be “cheating”. Relying more on the NodeMCU/ESP8266 could have had a couple of advantages:

  • having to write much less 6502 assembly and, as a result, potentially finishing the project faster
  • a general-purpose SignalR client in C oriented towards embedded systems

Run serial communication at more than 1200 bauds

To simplify the project, I decided to run communication between C64 and the C64 WiFi modem at 1200 bauds. This is really, really slow. If “1200 bauds” does not tell you much: this is roughly 120 bytes per second. C64 can potentially support up to 2400 bauds. The C64 WiFi modem supports speeds up to 9600 bauds. To run at these speeds, I would have to use special routines for handling serial traffic. This would bring the transfer to almost 1KB/s. To be honest, given today’s transfer speeds, I don’t think achieving 9600 bauds would make any difference. This project does not and will never have any practical use, so 1200 bauds is probably as good as 9600 bauds. We also only have less than 60 KB of memory to fill (after disabling all ROMs).

Addendum

There is a quite hilarious twist to the paragraph above. I wrote it after briefly testing the project on the actual hardware but before making the video I posted on YouTube. When I was shooting the video, I started seeing garbage on the screen. It started innocently – apparently the exclamation point was not shown properly. I immediately knew something bad was happening because I had dedicated code to handle punctuation marks. Nevertheless, I ignored the error hoping it wouldn’t get worse. Unfortunately, things went south really quickly – see for yourself:

I took a break to debug the issue and was only able to reproduce it on the actual Commodore 64 but never on the emulator. I concluded that I was hitting one of the bugs in the KERNAL code handling serial communication which would result in occasionally flipping bits. Indeed, after lowering the serial connection transfer speed from 1200 bauds to 600 bauds the problem disappeared. So, for the demo, I ended up with 600 baud for the real C-64 and 1200 baud for the emulator as the emulator did not have any issues at “higher” speeds.

These are my biggest takeaways from this project. In the next post I am planning to provide steps for anyone interested in trying this project out on their own.