Blog

Tale of Two Graphics APIs

November 17th, 2023 (permalink)

Once upon a time, there were two major graphics APIs. Or API families, but let's keep things simple and just call them that. On one side, we have Khronos and the OpenGL camp (including all sorts of Open-something APIs and their variants as well as Vulkan, but for simplicity's sake we'll call this side OpenGL), and on the other side we have Microsoft and DirectX. Also for simplicity's sake I'll generalize a lot and probably outright lie too, so caveat emptor and all that.

There's a principal difference in development philosophy between the two camps. OpenGL advances through extensions. Different stakeholders (mostly independent hardware vendors, or IHVs) agree or disagree on a feature and draft up an extension, which is then implemented in some or all vendors' graphics drivers, to be usable by software. Sometimes one vendor goes it alone and we get a single-vendor extension (which sometimes gets implemented by the others), sometimes a few, but not all, work together, and sometimes everybody agrees.

More often than not the most popular extensions turn into commonly agreed upon ones, either with or without amendments. That means that even though it looks like there's a massive amount of extensions, a single extension may exist (in the worst case) in single-vendor experimental form (for instance, AMDX), as a single-vendor extension (for instance, NV), as a multi-vendor extension (EXT) and as a standard version (KHR).
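
In application code this boils down to probing the extension list and picking the best variant available, something like this sketch (Python, with a made-up extension name purely for illustration):

# Hypothetical feature: the same functionality may be exposed under several
# extension names, so pick the best one the driver actually reports.
PREFERENCE = [
    "VK_KHR_fancy_feature",   # standard version
    "VK_EXT_fancy_feature",   # multi-vendor extension
    "VK_NV_fancy_feature",    # single-vendor extension
    "VK_AMDX_fancy_feature",  # single-vendor experimental
]

def pick_variant(supported_extensions):
    for name in PREFERENCE:
        if name in supported_extensions:
            return name
    return None   # not available at all; time for a fallback code path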

The extensions also get their own set of tests in the conformance suite, a huge pile of tests your drivers have to pass in order to be compliant. If the extension documentation isn't clear enough, just look at what's being tested.

The good side of all of this is that if your new graphics hardware supports some fun feature you can get it into the hands of software developers relatively quickly. The bad side is that bleeding edge software tends to need several code paths, based on various extensions. This isn't so much of an issue these days (but still, kinda, is), but as far as I recall, some iteration of Quake had an unreasonable number of rendering code paths depending on what set of extensions was available.

New OpenGL versions are defined by saying that okay, from this version on, this bunch of extensions is part of the core. This also means that extensions must be able to remove features, which is fun. Okay, sure, there have been a few attempts at clean breaks, but the lineage is still there. Also, it seems that every time people sit together and decide to kill awkward feature X once and for all, someone will eventually figure out that we need awkward feature X after all, and it has to be hacked back in after the fact.

All this means that if you're writing an OpenGL driver from scratch, you could say that okay, I'm only supporting version 3.14 onwards, but in practice you'll want to implement all the old (and in some cases, really obscure) features too to support old software.

DirectX doesn't have that problem, as there are discrete versions, and Microsoft (more or less) starts each new version from scratch. Porting software between versions is possible, but takes (depending on which versions you're hopping between) about as much work as hopping between camps. Some versions of DirectX are closer to each other than others. There's also no extension mechanism: Microsoft dictates what's in there and that's that. (Hardware can still have different capabilities while supporting the same DirectX version, but that's different from extensions.)

In short, there's only one version of DirectX and there are no extensions. Simple.

Microsoft added OpenGL support to Windows way, way back to make some CAD customers happy, and has never updated it since. But OpenGL has an extension mechanism, so it could be argued that every modern OpenGL application on Windows is still an OpenGL 1.1 application with "a few extensions".

While the attention is always on the latest and greatest, it's of course still possible to use older APIs. You can write OpenGL 1.1 programs (API from 1997) and they'll work fine. With DirectX I think at the time of writing you can go way back to DirectX 9 (from 2002), but there's fairly little reason not to use DirectX 11 (2009) if you're starting a new project, and it's only a matter of time before DirectX 9 support starts breaking, if it hasn't already.

The latest and greatest as of right now are Vulkan from the OpenGL camp and DirectX 12 from Microsoft. Both of these have the same basic idea of moving a lot of things that were traditionally the graphics driver's headache into the application. The good side of this is that drivers no longer have to make a lot of assumptions about the application's behavior. The bad side is that now every application has driver responsibilities.

You've probably heard that some company or other's graphics drivers suck? Now every time a major title launches, the hardware vendors scramble to make sure the title works fine on their hardware, usually requiring new drivers. And if you're not a major title? Well, sucks to be you, I guess.

Remember when I said that there's only one version of DirectX? With DirectX 12, there are (as of this writing) 13 versions of the device object alone. There's also the Agility SDK, which lets you use new versions of DirectX 12 before they're commonly distributed. In addition, every major hardware vendor has hacked an extension mechanism into DirectX 12, even though it does not have an official one.

To make things even worse, Microsoft, the company that famously documents the undocumented, has no formal specification of DirectX 12. I'm not aware of a conformance suite either. There's the Hardware Lab Kit, but that's not exactly end-user friendly, and I have not been able to get much out of it. There may be a reference rasterizer - I know there have been for previous versions - but source code access to such is improbable.

Sigh. Did this rant have a point? Not really. It is what it is.

I've been watching the development of graphics APIs over the years and they've become more and more challenging for new developers, and I foresee this situation getting even worse in the future. No, Vulkan and DirectX 12 are not where this trend ends; there's still a way to go.

My concerns have generally been brushed aside with the argument that everyone just uses an engine these days and new programmers don't need to bother with any of this. Here be dragons and all that. I once posted an image of "hello triangle" in various graphics APIs side by side, which went a bit viral. When the Unity install fee debacle hit earlier this year, a lot of people started looking around for alternative engines or - in some cases - asking how much they actually need an engine to begin with. Many people realized the emperor doesn't actually need all that much in the way of a wardrobe.

I think there's still a need for programmer-friendly graphics APIs. They don't even need to include shaders. Or default shaders could cover a bunch of bases (textures + lighting). But they definitely don't need line stipple either. On the other extreme, all you need are shaders. I still think my lightweight OpenGL on top of Vulkan idea has merit.. but it's not the only way to approach things.

There's the SDL library's planned graphics API abstraction, which seems promising but may be trying to do too much at once. Maybe it'll work out. And of course, Apple is living in their own world. I've heard good things about Metal, but it's unlikely it will ever get out of Apple's walled garden.

New Site, Who Diz?

October 29th, 2023 (permalink)

My old web layout was 13 years old. I had made a few attempts at a new layout, but they generally fizzled out. The problem is that my site has a lot of stuff in it, so any change requires a lot of work.

13 years ago mobile internet wasn't a thing, so the old layout doesn't work super well on a phone or a tablet. The redesign is based on Bulma, which lets me create flexible layouts pretty easily. It did require some changes to get some features to work consistently, though.

The old site was built using PHP, which may come as a surprise to some people. Yes, I ran PHP locally to generate these static pages. This was replaced by a Python script that converts Markdown source pages to HTML, with a bunch of custom bits. While converting the pages I also had a Python script to do most of the annoying transformations, but basically every page still required manual work.

As an example, here's a regex that converts HTML links to Markdown links:

import re
content = re.sub(r'<a\s+?href="(.*?)"[^>]*?>(.*?)</a>', r'[\2](\1)', content)

I also ditched Google ads and analytics. It's not like this site is ever going to draw enough traffic to actually be profitable. Some ten years ago I did get enough ad revenue to barely cover hosting costs, but that dried up long ago.

Design-wise things are now way more bare-bones, with no fifteen custom fonts and so on; stripping all of that out was pretty painful when converting the pages.

I decided on Markdown just to get rid of all kinds of custom things and to focus on the content instead of the presentation. Some things may have been lost in translation, but maybe others have been gained.

Among other simplifying things, syntax highlighting is now done offline, featuring such classics as C,

#include <stdio.h>

// testing
void foo(const char *boo)
{
    printf("syntax color test");
}

..Python..

# test
print("hello")

...and I wrote one for Z80 assembly:

foo::
    ld b, 57
    ld hl, 0xa000
.localloop:
    ld (hl), 7
    inc hl
    djnz .localloop
    jp nc, 1e37h ; test

The Z80 highlighter doesn't bother with strings or af', so those two features can't mess each other up.

The Markdown to HTML conversion is performed with marko and the syntax highlighting with Pygments, both of which required some additional hacks. I actually tried all the Python Markdown-to-HTML converters I could find, and "marko" was the only one that more or less covered all the features I wanted. I mean, I could have used Pandoc, but I wanted to keep everything within Python.
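
Roughly, the core of that conversion looks something like this (a simplified sketch, not the actual script; the real one has custom bits and hacks layered on top):

import marko
from pygments import highlight
from pygments.lexers import get_lexer_by_name
from pygments.formatters import HtmlFormatter

def render_markdown(md_text):
    # Markdown body to HTML; the real script adds custom extensions on top
    return marko.convert(md_text)

def render_code_block(code, lang):
    # Offline syntax highlighting for a single code block
    return highlight(code, get_lexer_by_name(lang), HtmlFormatter(nowrap=True))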

After the Markdown to HTML conversion, I do a lot of additional muckery, like adding those permalinks to news articles using further regexes, and finally inserting the content into the template (which itself goes through a bunch of transformations, like adding the title and the automatically generated "probably last changed" thing at the bottom).
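
To give a taste of that post-processing step, here's a sketch of the idea; the placeholder names and the exact regex are made up for illustration:

import re

def finalize_page(template, title, body):
    # "{{title}}" / "{{content}}" are made-up placeholders, not the real ones
    page = template.replace("{{title}}", title).replace("{{content}}", body)
    # Illustrative permalink pass: give each news heading an id of its own
    page = re.sub(r'<h2>(.*?)</h2>',
                  lambda m: '<h2 id="%s">%s</h2>' % (slugify(m.group(1)), m.group(1)),
                  page)
    return page

def slugify(text):
    return re.sub(r'[^a-z0-9]+', '-', text.lower()).strip('-')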

Considering how massive the changes have been, some things may be a bit wonky, but we'll get there. Eventually.

Sending MIDI data on ZX Spectrum Next

September 22nd, 2023 (permalink)

Continuing from the previous MIDI transfers, I've implemented sending data as well.

The first problem was how to communicate the direction of the data, and this was solved by using the app control pins:

0  -> 15: quit
10 -> 12: move to receive mode
10 ->  3: move to send mode
10 ->  6: move to idle mode

In addition to the "quit" command, the nextpi-usbmidi server now listens to three different commands to switch modes between send, receive and idle (where idle hopefully enables using I2S).

When the server switches to receive mode, it sends "MID" bytes before sending any actual MIDI data; this lets the dot command receiving the data synchronize. All of the data is sent in triplets.

When the server switches to send mode, it expects "MID" from the dot command before any MIDI data, for the same reason.
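
The receiving end of that little protocol is conceptually just this (a Python sketch of the idea only; the actual dot command is not Python, and read_byte / handle_midi_command are placeholders):

def read_midi_stream(read_byte):
    # read_byte() and handle_midi_command() are placeholders for however the
    # receiver actually pulls bytes off the link and acts on them.
    sync = b""
    while sync != b"MID":
        sync = (sync + bytes([read_byte()]))[-3:]   # wait for the "MID" marker
    while True:
        status = read_byte()                        # after the marker, everything
        data1 = read_byte()                         # arrives as three-byte
        data2 = read_byte()                         # MIDI commands
        handle_midi_command(status, data1, data2)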

Yes, this means that we can't send and receive MIDI data at the same time. There are a couple of new dot commands - .midipanic sends note off on all notes on all channels, and .midisend sends data.

The .midisend dot command takes any number of parameters (up to 64 notes at once); each group of three bytes sends one MIDI command. The parameters can be integers or integer variables.
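
As a concrete example of what such triplets look like (standard MIDI, sketched in Python; the helper names are mine, not part of the dot command):

NOTE_ON, NOTE_OFF = 0x90, 0x80

def note_on(channel, note, velocity):
    # One MIDI command = one three-byte triplet
    return [NOTE_ON | (channel & 0x0f), note & 0x7f, velocity & 0x7f]

def note_off(channel, note):
    return [NOTE_OFF | (channel & 0x0f), note & 0x7f, 0]

# e.g. note_on(0, 60, 100) -> [144, 60, 100] and note_off(0, 60) -> [128, 60, 0]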

The earlier video is still relevant, as the Raspberry Pi binary has to be running for any of this to work.

And again, this is of limited use to casual users, but developers may find it useful.

All code is still on GitHub under a basically-public-domain license, and prebuilt binaries can be found here.

USB MIDI on ZX Spectrum Next

September 16th, 2023 (permalink)

There's some music-making software for the ZX Spectrum Next that could use MIDI input. I've heard of hacks where people convert MIDI into (text) keyboard input. But why not just... use the USB MIDI?

Note that the following skips very, very many experimental steps and a ton of research.

There's a Raspberry Pi Zero in the ZX Spectrum Next (or if there isn't, it's possible to add one as an after-market upgrade). So far there hasn't been much use for it, but it does have a USB port.

There were a few hurdles to solve. First off, how do you compile stuff for the NextPi? There's no gcc on it, and even if it could connect to the internet, there's no (functional) apt-get, and even if there were, the distribution is so old that getting hold of correct binaries is tricky.

For the heck of it, I just ssh'd to my 3D printer's OctoPi, compiled a hello world there, sent that over to the NextPi (scp back to my PC, NextSync to the Next, .pisend to the NextPi, launch in .term), and lo and behold, it worked. A C++ binary did not, however; the C++ library on the NextPi is frozen at 6.0.22, and good luck finding cross-compile tools that can target that.

I mean, I tried. I built virtual machines and tried to access old package depots etc, and the closest I got was 6.0.25.

Luckily you can link the C++ library statically, which got me a binary that worked on the NextPi, but the result was a 750k binary. Annoying, but not a showstopper. Later on I got a development build of NextPi2 (the distro on KS2 Spectrum Nexts) which had a path to building smaller binaries with a dynamically linked libstdc++, but more on that later.

I got RtMidi working on the NextPi via ALSA. RtMidi was familiar to me from before so I was happy that it was relatively painless.
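
The server itself is C++ talking to RtMidi directly, but the core idea looks roughly like this (sketched here with the python-rtmidi binding for brevity; forward_to_next is a made-up placeholder):

import time
import rtmidi   # python-rtmidi

midi_in = rtmidi.MidiIn()            # uses the ALSA backend on the Pi
print(midi_in.get_ports())           # list attached (USB) MIDI devices
midi_in.open_port(0)

while True:
    event = midi_in.get_message()    # (message_bytes, delta_time) or None
    if event:
        message, _delta = event      # e.g. [0x90, 60, 100] for a note-on
        forward_to_next(message)     # placeholder: push it over the GPIO link
    else:
        time.sleep(0.001)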

The Pi is connected to the Next in several ways; .pisend uses the UART, but I wanted to use the GPIO pins directly because I think that's easier (and potentially faster). After mapping out what all of the GPIO pins on the Next are used for, and lengthy conversations with the Next developers on the Next Discord, I ended up with one byte (which maps directly to one nextreg) for data and a couple of bits elsewhere for ready-to-send/ready-to-receive signals. Discussions also led to defining a mechanism for the Next to tell the currently running server to clean up after itself and quit, which uses the last 4 bits.

Next I wrote a test application for the GPIO that sent random data down to the Next, and a Next counterpart that would read the data and blink the border. This yielded transfer rates of about 100kB per second, which was way faster than I expected. Basically the Pi was fast enough to offer data whenever the Next asked for it. Granted, the payload wasn't heavy (one rand()), but the Next was asking for data pretty much as rapidly as it could.

Talking of GPIO..

The GPIO situation on the Next looks something like this:

  • Pins 0-1 are dead (used internally by pi0, afaiu)
  • Pins 2-3 are I2C bus (connected to RTC, whether it's used is another matter)
  • Pins 7-11 are SPI bus (connected to SD, not that anyone uses it)
  • Pins 14-15 are for UART (pisend, .term, etc; pretty critical)
  • Pins 18-21 are for I2S (used to play audio through the Next; used by the tape player)
  • Pins 24-27 are reserved for app control; pattern 0000->1111 means "clean up and quit"
  • Pins 28-31 do not physically exist on the raspberry pi zero

In other words,

0  1  2  3  4  5  6  7  8  9  10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27
XX XX XX XX          XX XX XX XX XX       XX XX       XX XX XX XX       XX XX XX XX
-dead-I2C-           -SPI-----------      -UART       -I2S-------       -AppCtl----

Since I really needed one 8-bit range, I had to pick between the one starting from 8 and the one starting from 16. I picked the one that overlaps with I2S, so I can't play audio while this thing is running. Can't have everything..

For control bits I used 4, 5 and 6. Pins 4 and 5 signal "I want more data" from the Next and "I have provided data" from the Pi. Pin 6 is reserved for changing the data direction, a feature I haven't implemented yet. I may not end up using a dedicated pin for it in the end, and instead will define bit patterns in the app control bits to mark direction changes.
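
For the curious, the Pi side of that handshake boils down to something like the following sketch (Python with RPi.GPIO purely for illustration; the real server is written in C, and the signal polarity and exact sequencing here are assumptions):

import RPi.GPIO as GPIO

DATA_PINS = list(range(16, 24))   # the 8-bit range that overlaps with I2S
REQ_PIN = 4                       # Next: "I want more data"
ACK_PIN = 5                       # Pi: "I have provided data"

GPIO.setmode(GPIO.BCM)
GPIO.setup(DATA_PINS, GPIO.OUT)
GPIO.setup(REQ_PIN, GPIO.IN)
GPIO.setup(ACK_PIN, GPIO.OUT, initial=GPIO.LOW)

def send_byte(value):
    while not GPIO.input(REQ_PIN):       # wait until the Next asks for data
        pass
    for i, pin in enumerate(DATA_PINS):  # put the byte on the data pins
        GPIO.output(pin, (value >> i) & 1)
    GPIO.output(ACK_PIN, GPIO.HIGH)      # tell the Next the byte is there
    while GPIO.input(REQ_PIN):           # wait for the request to drop...
        pass
    GPIO.output(ACK_PIN, GPIO.LOW)       # ...and clear the ack for the next byte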

Finally I wrote a simple AY player that would take in MIDI note-on and note-off commands, recorded a short video and felt really accomplished.
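
The only slightly non-obvious part of such a player is turning MIDI note numbers into AY tone periods, which is just a little math (a sketch; the 1.7734 MHz AY clock is an assumption based on the classic 128K Spectrum, and the Next's clock may differ):

AY_CLOCK = 1773400.0   # assumed: classic 128K Spectrum AY clock, in Hz

def midi_note_to_ay_period(note):
    freq = 440.0 * 2.0 ** ((note - 69) / 12.0)   # MIDI note number to Hz
    period = round(AY_CLOCK / (16.0 * freq))     # AY tone period register value
    return max(1, min(period, 0x0fff))           # clamp to the 12-bit register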

By that point Xalior had built a new development build of NextPi2 that I could use with an Ethernet adapter to actually apt-get stuff from a legacy depot. I set up a separate Raspberry Pi 0 (I'm not messing with the one inside my KS1 Next) with said image, installed build-essential and everything looked fine, until I needed to install the ALSA development stuff. There's a minor version dependency issue in the legacy depot which I had to work around, and after a lot of experimentation the following worked:

apt-get download libasound2-dev
dpkg -i --ignore-depends=libasound2 libasound2-dev_1.1.3-5_armhf.deb

This let me build a binary that was just 35k, as opposed to the 750k with the statically linked libstdc++.

Next up was a bunch of cleanup, fixing, adding nice touches and, well, recording a YouTube video.

So what was the point? I mean, a 9-channel AY synth isn't super useful, right? I doubt anyone would want to use it like this, but it shows that the whole pipeline works. My code can be used by others as a starting point for, say, integrating USB MIDI support into a DAW. It also lays the groundwork for other things with the NextPi.

All of the source code can be found on GitHub as usual, under the Unlicense, so go wild. Pre-built binaries are here, although this is more interesting for developers than end users.

3D Printer

August 19th, 2023 (permalink)

So I turned 48 and bought myself a birthday present: a 3D printer. I'd been pondering this for a long time, watched a bunch of videos, and done some web searches to figure out which one would be a good one, only to find, to my frustration, that every "best beginner 3D printer in 2023" list has completely different products on it.

The best way to learn about 3d printers is to just buy one.

I asked around on Mastodon, where the #3dprinting hashtag gets a healthy amount of posts, and got suggestions for a bunch of different printers (and some suggestions against); the Sovol SV06 was one that was suggested, and looking at the specs it ticked all the boxes I was aware of, plus it was surprisingly cheap, had great reviews, etc., so I went with it.

The SV06 has a magnetic print sheet on a heated print bed, relatively easy levelling, dual Z steppers and a bunch more things some more knowledgeable person might highlight. It does not have a filament runout sensor (the jury is still out on whether one is needed) and the fans are apparently pretty crappy. There's also one customization every SV06 owner is recommended to do, which is to add a cable strain relief thingy (there are many variants on the 3D print file sites) to the cable that attaches to the print bed. Apparently the cable may break otherwise.

The printer plus some additional tools (mostly for cleaning) came to around 300 euros. On top of that I've spent over 100 euros on filaments (PLA and PETG).

I don't intend to print any ABS/vinyl any time soon. Or anything that might require a hardened nozzle. Might try soft filaments at some point for the fun of it.

Printing ABS/vinyl wouldn't be possible in any case without an enclosure, and even if I had one, those materials tend to release toxic fumes when melted, which doesn't sound like a fun time. I've used PETG for parts that have environmental requirements (like spare parts for the dishwasher).

I wish there was an "everything you wanted to know about 3D printing" site out there - maybe there is - maybe it's the Maker's Muse book, which I haven't bought - since there's just tons of stuff you need to know. Well, "need" is a strong word here; you can get started with fairly little information, like I did, and even the cheap 3D printers today come with so many convenience features that you get a few prints out of your printer before you have to actually start troubleshooting things.

The primary web store I've bought my filaments (and various tools) from actually has descriptions of the different filaments, their attributes and requirements, which has been super helpful. When I've talked about my problems on Mastodon, a lot of people have been helpful in troubleshooting.

One particular thing the SV06 lacks is any kind of wifi support. There's an easy solution to this though, and that's OctoPrint. Among other things it comes as a Raspberry Pi image, so since I happened to have a Pi that I had bought some years ago for a project that went nowhere, I repurposed it here. And I can tell you that a web interface is way, way better than carrying a memory card around. In addition to that I can see some real-time stats, time estimates that are always hilariously off, and a webcam image (with time-lapse generation if I want). Most of the time the Pi uses less than 10% of CPU; while encoding a timelapse it's completely tapped, though. As it happens, even with the heatsinks I had installed the Pi was overheating, so I added a fan I had in my junk pile (probably harvested from a dead Wii) and that dropped the temperatures all around by 20C.

Now then, to the stuff I would have liked to know beforehand (but didn't know at the time)..

3D printing is pretty much all about temperature management. Filament comes into the print head at (approximately) room temperature and hits the heat break, which fan number one is trying to keep as cool as possible (or as close to room temperature as it can). Next the filament hits the hot end, where it's heated up to printing temperatures (which typically range from 200C upwards). After coming out of the nozzle onto the print bed, the filament is hit by fan number two, which tries to cool the part that's being built so the plastic sets. Finally, the print bed itself is heated (typically to 60-100C) to avoid warping the part when it cools, and also to keep the part from coming unstuck from the bed.

On top of that there's printers with heated enclosures to further fight material shrinkage.

Tweaking these various temperatures (and fan speeds) is a large part of getting successful prints. A large part, but by far not the only one. There are a lot of different variables you can adjust, both physical and in software.

By far the biggest reason for failed prints for me (and we're talking a significant percentage of prints failing as I'm learning) has been insufficient adhesion. By that I don't mean a lack of glue, but for example a too-cold bed, or narrow pieces coming unstuck once they get tall enough, etc. There are several ways to tackle these - prefer flat objects (by simply rotating them), slow down the print speed (at least in the initial layers), add small support structures at the bed level, adjust the temperatures like I mentioned.. in the case of ABS, or when using glass print beds, there's actual glue sold for the purpose, but in most cases modern print beds don't need anything, as long as they're clean.

And to clean them you probably want to use some sort of cleaning alcohol, which may or may not be easy to source depending on where you live.

Most of my failed prints also fail really early. Common advice is to watch your printer while it does the first couple of layers, and that's solid advice. Since it's not feasible to keep watching the prints all the time (they tend to take hours), I still listen to them. When a part gets loose, you can definitely hear it, and then it's time to go stop the print, toss the unfinished print into the sorted-by-print-material bin with a sigh, clean the print bed, go back to the slicer, tweak parameters and try again.

There are two main schools of slicers: PrusaSlicer, which is derived from Slic3r (apparently the OG of slicers), and UltiMaker Cura, which started as a hobbyist open source project before UltiMaker hired the author. There are probably some others too, but the ones I've seen are more or less forked from one or the other, and the two tools are fairly similar. The SV06 comes with Sovol's fork of Cura, which is a relatively old version. I'm planning on switching to either PrusaSlicer or mainline Cura. Just after this one project is finished...

One fun thing that's happened in the 3D printing community in the past few years is... speedruns. People tweaking their printers to try to get the 3DBenchy boat to print as fast as possible. A side effect of this is that both hardware and software have improved, and the new kid in town is Klipper. This isn't something I have, or am planning to move to (even though I'm pretty sure it would run on the Pi), but it's interesting nevertheless; Klipper works by flashing your 3D printer with firmware that turns it into a dumb terminal of sorts, and then the Klipper software takes over. Apparently doing this to very old printers lets you boost their speeds by insane amounts. There's also input shaping, where an accelerometer is used to analyse the particular printer, and movements are then optimized so that the vibrations generated by the print head moving at stupid speeds are cancelled out.

I am planning on replacing the part-cooling fan at some point, as apparently that helps with some printing issues. And I'm currently building an enclosure by making a variant of the popular IKEA Lack table hack. It would have been cheaper to get a tent for the printer, if an enclosure was all I wanted, but I also want the printer to have a nice place to live in. What I did not anticipate is that the hack requires A LOT of parts to be printed, so I ran out of the PETG I was planning on using for it, and am running out of the next colour too. Luckily most of the parts have printed successfully (apart from a few early aborts).

Apart from all the hardware knowledge and various slicer parameters, I have also dived into 3D modelling a bit. When I asked around, OpenSCAD and FreeCAD were recommended. These two have wildly different approaches to making 3D parts, neither of which was familiar to me beforehand. I've also used Blender as a general Swiss army knife when I've had to make some changes to files downloaded from Thingiverse or Printables. I know there are commercial tools out there that are way simpler to use, but apparently you can't even BUY them anymore; instead you have to rent them, Adobe-cloud-like, with monthly fees. No thank you. And even if they were sold, they'd probably be way too expensive.

So all in all it's been an interesting learning rabbit hole. And I'm sure I'm not nearly done.

Gall Bladder

March 12th, 2023 (permalink)

Health-wise, this spring has been a bit of a disaster.

This post has a lot of health-related things in it, so, content warning and all that.

Our household has averaged about one flu per week, in most cases the little one catching some bug at daycare and then passing it on to the rest of the family. Add to that the global political situation, COVID still being a thing (as much as people act as if it wasn't), sprinkle in some extended family drama I won't get into, and my stress levels haven't been healthy either.

Which is foreshadowing for about four weeks ago, when I was submitting a pull request at work and got comments on it - completely routine - and my stress just boiled over. I started shaking and my scalp felt like it was vibrating. I've heard enough horror stories about untreated stress reactions, so I stopped working for the day and booked a phone appointment with a doctor as soon as I could; that doctor told me to take the rest of the week off and see a doctor in person.

The doctor I saw in person then recommended that I take more sick leave (how much, I do not know, but judging from the reactions of the other doctors I've seen since, probably several weeks' worth). I didn't want that, as I had stopped working so suddenly that I was afraid I might develop some kind of phobia about going back to work. We did start on some medication that should help, slowly, though. Medication that, among other things, initially made me feel like throwing up, and which somewhat messes up my internal signals, so feelings like hunger feel strange.

Back at work things were stressful, and I was actually a bit scared going back. For the first couple of days I pondered if it would have been a good idea to take the sick leave after all, but then things stabilized.

Around this time I got my first attack. I had had a heavy dinner, and around bedtime, but before lying down, I felt like something wasn't agreeing with me. It passed in a couple of hours though. Later attacks, always after I had eaten something a bit heavier late in the day, got worse, but still passed. When an attack was going on, nothing seemed to help; I tried lying down, standing, sitting, lying on my side, etc. Painkillers, throwing up, and spending hours in the toilet didn't help either.

It felt kind of like something was squeezing a knot in my chest, right at the middle, and kind of small; not something I would associate with a heart attack. More like anxiety. There was also a feeling I can't describe that was just... suffering, which got worse the more I stayed in one position, but never went away, until the attack passed.

Given that I had started on new medication recently, I suspected it was that, and hoped it would go away. After all, the initial nausea from the medicine had become less and less of an issue as the days went by. I saw the doctor again, and she said that to her knowledge, the medication should not cause such issues.

I took a week off from work when the kids had their winter break from school. Coming back to work I jokingly said I could sleep for two more weeks. This was, coincidentally, about two weeks ago.

The fourth attack started at 3am, and right from the start it was clear it was worse than the previous ones. After a couple of futile attempts at getting a taxi to the hospital (thanks, Anne Berner), I called 112 and described my symptoms. An ambulance arrived very quickly.

In the ambulance the medics wired me up to check whether I was, in fact, having a heart attack. Their conclusion was that it probably was not, but they hit me with nitro and morphine just in case, hooked me up to a drip and sped me to the hospital.

Two notes on morphine: first, it did not help with the pain. Second: why would anyone WANT to be on morphine? It was horrible.

The emergency room at the hospital at 4am was very quiet; I was the only patient. So, I was given some more painkillers via the drip (they didn't help), plus a bunch more tests, including a more thorough heart exam to rule out any heart-related issues. Eventually they ran out of obvious things to do and I was lined up for an ultrasound, 10 hours later.

There was nothing to do but try to sleep, text relatives to keep them up to date, and listen. I was in too much pain to use my phone for actual entertainment, plus I had not packed a charger, so I saved the battery. Around 7am other patients started drifting in.

It's no surprise hospitals are such popular TV fodder: they're chock full of stories. Between the night nurses' gossip and various patients' accounts, it's a veritable gold mine for writers.

There was the nurses' confusion about their new contracts and whether they were allowed overtime or not; an elderly woman worried about her dog, left alone at home while she was there; another patient with a suddenly paralyzed side and an impressive number of people she needed to call with updates.

I listened to the various devices beeping and pondered about sound design. If the devices had pure waveform beeps, it would get really annoying for the people working there, but at the same time the sounds need to be pretty commanding when someone is, well, dying. And you want to recognize which device is demanding attention without having to search for it.

Eventually the time for my ultrasound came, and they found there was something odd about my gall bladder. No stones, but clearly irritated. Satisfied that this was my problem, they prescribed some antibiotics and sent me home, saying I should pop by the hospital closer to me in the morning if my situation hadn't improved.

At this point I had been up over 14 hours without eating (they couldn't give me anything to eat in case I needed surgery), so I stumbled my way out, managed to get a taxi, visited the drug store for my meds and then got home. My stomach felt like a crumpled ball of paper, and I did not feel like eating much.

I did not sleep much that night. I was feverish and stressed about making it to the first bus to the hospital in the morning. I made myself a sandwich at around 5am, pondering about last meals.

On the bus I pondered whether I should try to pop by an optometrist after getting my results from the hospital.

I had been told to go to the emergency room as early as possible to beat the morning rush, and that's what I managed to do. When I arrived around 7am, the reception wasn't even technically open (even though the emergency room itself is open 24 hours a day). After sitting down for a few minutes a nurse came to ask why I was there. I explained my situation, was directed to a different waiting area, and even before I managed to sit down, someone was there to take my blood samples.

While waiting for the results I must have looked like a mess, because another nurse asked if I wanted to lie down, which I gladly agreed to. I still felt awful from sleeping, eating and drinking far too little the day prior. I was given a protein shake, which blissfully calmed my stomach down.

The results from the bloodwork came along with a doctor, who said my values were a mess. I had to recount this tale to him, after which he went and called the other hospital to check what should be done with me. I was hooked up to a drip again, since it was clear I was going to need intravenous antibiotics if nothing else. The doctor came back and said they'd be sending me back to the other hospital to be operated on. There was nothing to do but wait for another ambulance. So much for my plans for the day.

Around this time I realized it was really painful for me to breathe if I lay on my back. The trip to the other hospital I spent lying on my side, which wasn't fun.

Back at the other hospital the debate between surgery and antibiotics went back and forth for a couple of days. Several rounds of blood tests were taken, along with an MRI. I got hold of a phone charger and started spending time listening to podcasts. After exhausting Tom Scott's Lateral, I started working my way through Bec Hill's and Matt Parker's A Problem Squared, re-living the initial COVID lockdowns.

The MRI was an interesting experience. I'm not surprised that it's terrifying if you have even an inkling of claustrophobia. You're put into a tube and it's LOUD. I noticed at some point that the MRI changed its behavior based on my breathing pattern, and, since nobody told me not to, I played around with it. I'm not saying you should. But nobody mentioned it later on, so I guess it was okay? Breathing on my back was still painful, but getting through the tests was more important.

The MRI results agreed with the ultrasound: something was wrong with the gall bladder, but there were no stones, which the doctors found strange. The doctor in charge said that even if they managed to treat it with antibiotics, the organ would be nonfunctional (either due to, or despite, the treatment) and it would be better to remove it, or my symptoms would likely repeat.

So, after another two days of not eating I was finally scheduled for the operation. I returned my borrowed charger to my hospital room mate, wished him the best in case we wouldn't see each other again, and got prepped for the operation.

In the operation room, after confirming the right person was in for the right operation, I was given oxygen and told to take deep breaths. Around the fourth inhale the ceiling tiles suddenly changed to have huge, colorful, blocky R letters on them, and I had completely forgotten where I was.

I realized I had been in surgery and tried to shake myself awake. I was seeing double, but clearly not in the operation room anymore. Instinctively I checked if my fingers, hands, feet worked. I was really confused and messed up. The rational thing to do would have been to try to sleep it off, but some proto-human inside my skull was like, we've been poisoned, we have to get up and fight.

I think I may have asked the recovery room staff some slightly inappropriate questions.

Back in my hospital room, after the worst of the drugs had worn off, I was finally given something to eat - mostly liquids, but I was glad to take it. My wife dropped by with a proper charger and some light reading that I had requested. Over a video call, my brother commented that it was the only time he'd talked to me while I was on drugs.

Over the next couple of days the painkillers given to me for the operation finally wore off. I was a wreck, but an understandable one. I also noticed I could breathe lying on my back again.

And then, after making sure I could pee again (a complication I definitely would have preferred to skip, thank you very much), I was sent home for another week of rest before being allowed back to work. As I'm writing this I'm still a complete wreck and have no problem taking it easy. I hope I'm well enough on Monday to get something done at work.

Looking back, I've probably had some related symptoms for years: occasional odd pains when breathing, pain when lying down in certain positions for no apparent reason. Maybe they've been related, but it's hard to say. At least the location of the pain has been similar to where they operated.

Everything considered I've been incredibly lucky. I hope the attacks were related to the gall bladder, but only time will tell.

MMXXIII

March 11th, 2023 (permalink)

Another year, another slow start at posting..

Lots of things changed last year. Hosting for this site, for one. The startup I was working at got sold to Intel, which was interesting. No, I didn't get rich; didn't have any company stock.

I still tinker with ZX Spectrum Next stuff. Maybe I'll get something finished this year. Here's hoping.