Reviving My Kodak Portrait 3D Printer with Raspberry Pi Solutions

Years ago I bought a Kodak Portrait 3D printer. It is a great printer: very solid, mechanically excellent, with two nozzles, and it can print ABS. The problem is that very shortly after I bought the printer, the company that built it orphaned it. The whole system was mainly cloud-based. You used a custom version of Cura that uploaded G-code files to the cloud, then you logged into their website to initiate printing and could watch the job through the printer’s built-in webcam. That worked for a while, until it didn’t. Their version of Cura is now hopelessly out of date.

So step number one was to figure out how to get a modern Cura to slice for the machine. It wasn’t easy, but I eventually wrote a script that Cura applies after slicing to patch up the G-code.
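The details of that script depend on the Portrait firmware’s quirks, but its shape is just a line-by-line filter over the sliced file. Here is a minimal sketch; the `;FLAVOR:` rewrite is a hypothetical example of the kind of edit involved, not the actual patch:

```python
# Hypothetical sketch of a post-slice G-code patcher. The specific edit shown
# (normalizing the flavor header) is illustrative, not the real fix.

def patch_gcode(text):
    """Filter sliced G-code line by line, rewriting lines the firmware rejects."""
    patched = []
    for line in text.splitlines():
        # Assumed example edit: force a flavor header the firmware accepts
        if line.startswith(";FLAVOR:"):
            line = ";FLAVOR:Marlin"
        patched.append(line)
    return "\n".join(patched) + "\n"
```

In practice, Cura runs post-processing scripts as small Python plugins, so the real version wraps logic like this in Cura’s script interface rather than running standalone.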

The next problem was that the only way to print a job was to put it onto a USB stick and print from the stick. That is very inconvenient. You have to walk over to the printer, grab the stick, go back to your computer, put the stick in, write the file, eject the stick, pull it out, and walk over to the printer again.

I wanted something that acted like a USB drive but could take files over Wi-Fi. I couldn’t find one, so I made one using a Raspberry Pi Zero 2 W. Several Raspberry Pi models have what is called Gadget Mode. Instead of acting as a general-purpose computer with USB host ports, you can flip it around so the USB port behaves like a device when plugged into some other computer’s host port. There are several device types it can emulate: a network adapter, a USB MIDI device, a keyboard or mouse, a game controller, and a USB drive.
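On a Zero, the mass-storage flavor of gadget mode boils down to loading the g_mass_storage kernel module and pointing it at a backing image file. A sketch, assuming the dwc2 overlay is already enabled in /boot/config.txt and using /piusb.bin as an illustrative path for the image:

```python
import subprocess

BACKING_IMAGE = "/piusb.bin"  # illustrative path to the virtual stick's image file

def gadget_cmd(image):
    # Command that makes the Pi's USB port present `image` as a removable drive
    return ["modprobe", "g_mass_storage",
            f"file={image}", "stall=0", "removable=1"]

def attach_gadget():
    # To the host, this looks like a USB stick being plugged in
    subprocess.run(gadget_cmd(BACKING_IMAGE), check=True)

def detach_gadget():
    # To the host, this looks like the USB stick being unplugged
    subprocess.run(["modprobe", "-r", "g_mass_storage"], check=True)
```

The backing image itself is just a file formatted with a FAT filesystem, which is why the printer happily treats it as an ordinary stick.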

By putting a Zero into gadget mode as a USB drive, I created a small volume on the Zero’s system SD card to hold the contents of this virtual USB stick. Any files placed on that volume show up on the host computer as if they were on an inserted USB drive. There is some trickiness involved, because the host computer (in this case the 3D printer’s internal Raspberry Pi controller) only scans the directory on insertion. So I wrote a script that detaches the gadget-mode drive, mounts the volume on the Zero, writes a file to it, unmounts it, and reattaches the gadget-mode drive. From the 3D printer’s point of view, the USB stick has been removed and reinserted, so it rescans and sees the new file.
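That detach/write/reattach cycle is really just an ordered list of commands. A sketch, with /piusb.bin and /mnt/usb_share as illustrative paths rather than the real ones:

```python
import subprocess

def transfer_commands(image, mount_point, src):
    """Ordered commands for the detach/write/reattach cycle (paths illustrative)."""
    return [
        ["modprobe", "-r", "g_mass_storage"],          # "unplug" the virtual stick
        ["mount", "-o", "loop", image, mount_point],   # mount the backing image locally
        ["cp", src, mount_point],                      # write the new G-code file
        ["umount", mount_point],                       # flush and unmount
        ["modprobe", "g_mass_storage",                 # "replug" so the printer rescans
         f"file={image}", "stall=0", "removable=1"],
    ]

def deliver_file(src, image="/piusb.bin", mount_point="/mnt/usb_share"):
    for cmd in transfer_commands(image, mount_point, src):
        subprocess.run(cmd, check=True)
```

The umount before reattaching matters: it flushes writes to the image so the printer never sees a half-written file.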

Next, I shared an inbox directory over the network with Samba and wrote a directory-watcher Python script that calls the file-transfer script whenever a file is dropped into that inbox.

Now, from Cura, I can use “Save to File”, select the Pi’s network share, drop the file in, and as if by magic, the file shows up on my printer’s UI when I walk over to start the print.

Note: I had help from AI to write the code and to suggest edits to this blog post.

Responsible Use of AI vs. AI Slop

I’m impressed by how many posts in my Facebook feed these days appear to be AI-generated, including both the text and the accompanying photos. Many are clearly pushing falsehoods, and some attribute statements to political figures and celebrities who never made them.

I admit, I use an LLM (large language model) to help write posts, but my process is to write the post completely myself, based on the truth as I know it, then run it past the AI and ask for a fact check on what I’ve said to keep me honest. Often the LLM wants to rewrite the post, supposedly to improve the grammar and flow. I draw the line there. I may edit my post with some of its suggestions, especially on grammar and facts, but I won’t let AI write for me.

AI-generated narrative text often has telltale signs once you’ve read enough of it, but even experienced readers can’t always be sure. Detection tools can help, though they’re not perfect. Clearly social media platforms can use their own AI to detect synthetic content. I stop short of saying platforms should ban it—because that risks a slippery slope toward limiting free speech—but they could at least flag posts with an “AI probability” rating to promote transparency.

On YouTube it’s a different story. Many channels are using AI narrators and AI photos within their videos. In most of those cases, I see it as a practical thing to do. People want to get their ideas out there, they write their own script, but they may not have a good speaking voice, good audio equipment, a budget to pay for voiceovers and stock photos, or even enough command of English to narrate their own videos. YouTube now asks creators to disclose when realistic content is AI-generated, which seems like a fair and responsible approach.

I ran this post past an LLM and let it fold in factual corrections—I left in a telltale sign AI touched the post, can you find it?