From Accounts and Groups to the Cloud: A Personal View of Computing’s Transformation
I started programming on the HP 3000 in the 1980s, in a world where the boundaries of the system were clear, and I could see and control every part of it. My files lived in FILENAME.GROUP.ACCOUNT form (accounts held groups, groups held files), a structure so consistent I could picture the whole storage layout in my head. If a program needed a different input file, I’d issue a single :FILE command, a file equation, to redirect it, and everything worked without recompiling.
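As a sketch with invented names: suppose a program POSTPGM opens its input by the formal file designator TRANSIN. A single file equation points that name at whatever file you choose:

    :FILE TRANSIN=NEWSALES.DATA.PROD
    :RUN POSTPGM.PUB.PROD

Every open of TRANSIN now resolves to NEWSALES.DATA.PROD; remove the equation and the program falls back to its default, with no recompile either way.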
The workflows had a tangible order. We’d read from sequential files (SEQ), update KSAM indexed files for fast key lookups, and store master/detail relationships in TurboIMAGE databases. In COBOL, the data division matched the record layouts exactly, and the procedure division mapped directly to the way data moved between those files. If something failed, I didn’t need a ticketing system or a cloud console — I just checked the error log, maybe ran LISTF ,2, and fixed it myself.
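In COBOL terms, that shape is easy to sketch. Everything below is invented for illustration (TRANSIN, ORDERDB, ORDERS, and CUSTNO are not from any real system), and the DBOPEN housekeeping is elided, but the outline is the idiom I remember: a record layout that mirrors the file, a keyed DBFIND to locate a detail chain, and chained DBGETs to walk it:

    IDENTIFICATION DIVISION.
    PROGRAM-ID. POSTPGM.
    ENVIRONMENT DIVISION.
    INPUT-OUTPUT SECTION.
    FILE-CONTROL.
        SELECT TRANS-FILE ASSIGN TO "TRANSIN".
    DATA DIVISION.
    FILE SECTION.
    FD  TRANS-FILE.
    01  TRANS-REC.
        05  TR-CUST-NO   PIC X(8).
        05  TR-AMOUNT    PIC S9(7)V99.
        05  TR-DATE      PIC 9(6).
    WORKING-STORAGE SECTION.
   * TurboIMAGE call arguments. The first two characters of the
   * base name are left blank for DBOPEN to fill in; DBOPEN itself
   * is omitted from this sketch.
    01  DB-BASE      PIC X(10) VALUE "  ORDERDB".
    01  DS-ORDERS    PIC X(8)  VALUE "ORDERS;".
    01  KEY-ITEM     PIC X(8)  VALUE "CUSTNO;".
    01  ITEM-LIST    PIC X(2)  VALUE "@;".
    01  MODE1        PIC S9(4) COMP VALUE 1.
    01  MODE5        PIC S9(4) COMP VALUE 5.
    01  DB-STATUS.
        05  DB-STAT  PIC S9(4) COMP OCCURS 10 TIMES.
    01  ORDER-REC    PIC X(80).
    PROCEDURE DIVISION.
    BEGIN.
        OPEN INPUT TRANS-FILE.
    MAIN-LOOP.
        READ TRANS-FILE
            AT END GO TO ALL-DONE.
   * Locate the detail chain for this customer (DBFIND mode 1),
   * then read its entries in chain order (DBGET mode 5).
        CALL "DBFIND" USING DB-BASE, DS-ORDERS, MODE1,
                            DB-STATUS, KEY-ITEM, TR-CUST-NO.
        CALL "DBGET"  USING DB-BASE, DS-ORDERS, MODE5,
                            DB-STATUS, ITEM-LIST, ORDER-REC,
                            TR-CUST-NO.
        GO TO MAIN-LOOP.
    ALL-DONE.
        CLOSE TRANS-FILE.
        STOP RUN.

The data division is the file, field for field, and the procedure division is the data flow. That one-to-one mapping is what made the whole pipeline legible.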
Each layer of that system was transparent. A KSAM file’s data part and key part were two physical files I could see and manage by name. TurboIMAGE datasets had fixed capacities I planned ahead of time. Batch jobs were plain text streams of commands that I could read like the script of a play I’d performed a hundred times.
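A stream file, with the same invented names as above, was exactly the commands you would have typed, prefixed with ! in place of the interactive : prompt, and submitted with :STREAM:

    !JOB NIGHTLY,OPS.PROD
    !COMMENT Post today's transactions into the orders base.
    !FILE TRANSIN=TRANS815.DATA.PROD
    !RUN POSTPGM.PUB.PROD
    !TELLOP NIGHTLY POSTING COMPLETE
    !EOJ

Reading it top to bottom told you exactly what the machine would do overnight.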
Today’s computing world is built differently. Files might live in an object store I never see, behind an API I can’t query directly. Code runs in containers, on virtual machines, on hypervisors, each layer a step further from the hardware. Data often sits in databases whose schemas and indexes an ORM generates on my behalf. Instead of a single file equation, there’s a web of configuration files, environment variables, and deployment scripts.
We’ve gained enormous power. A single modern system can scale across continents in minutes. Data replication is automatic. Integrations that once took weeks now happen in hours. But the trade-off is distance. The craft of knowing exactly where your data is, how it’s stored, and how each program touches it has been replaced by trusting multiple invisible layers to handle it for you.
For me, the HP 3000 wasn’t just a machine — it was an environment I could fully master. I could follow a transaction from the moment it entered a SEQ file to its final resting place in an IMAGE dataset, and I could change the path along the way if needed. That sense of ownership and complete understanding is harder to find today.
Still, the heart of programming remains: turning logic into action, and making a system do exactly what you intend. The tools have changed, the layers have multiplied, but the feeling of watching a clean run with no errors — knowing you made it work — is as satisfying now as it was when the HP 3000 ruled the room.