If you’ve ever tried to get a vintage computer game up and running on a modern system, you’ve likely been shocked at how fast the game ran. Why do old games run out of control on modern hardware?
Earlier today we showed you how to run older software on modern computers; today’s question and answer session is a nice complement that digs into why some older software (specifically games) never seems to work right when you try to run it on modern hardware.
Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.
The Question
SuperUser reader TreyK wants to know why old computer games run crazy fast on new hardware:
So what’s the story? Why exactly do the sprites in old games blaze across the screen so fast the game becomes unplayable?
I’ve heard that this is related to the game depending on CPU cycles, or something like that. My questions are:
Why do older games do this, and how did they get away with it? How do newer games avoid this problem and run independently of the CPU frequency?
The Answer
SuperUser contributor JourneymanGeek breaks it down:
Older games were written for a known, fixed processor speed. On early PCs that meant the 4.77 MHz clock of the original IBM PC, and developers tied the game’s internal timing directly to it; the game simply ran as fast as the hardware allowed.
They also took clever shortcuts based on those assumptions, including saving a tiny bit of resources by not writing internal timing loops inside the program, and they took up as much processor power as they could – which was a decent idea in the days of slow, often passively cooled chips!
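To make that concrete, here is a minimal C sketch (our illustration, not code from the original answer) of the kind of CPU-bound pacing a DOS-era game might have used; the helper names and numbers are hypothetical:

#include <stdio.h>

/* Hypothetical stand-ins for a game's real input, update, and draw routines. */
static int game_running = 1;
static int sprite_x = 0;

static void read_input(void)     { /* poll the keyboard here */ }
static void update_sprites(void) { sprite_x += 2; /* fixed step per frame */ }
static void draw_frame(void)     { printf("sprite at x=%d\n", sprite_x); }

/* CPU-bound "delay": a fixed number of iterations, so the pause shrinks
   as processors get faster, and the whole game speeds up with it. */
static void delay_one_tick(void)
{
    for (volatile long i = 0; i < 1000000L; i++)
        ;  /* burn cycles; wall-clock duration depends entirely on CPU speed */
}

int main(void)
{
    while (game_running) {
        read_input();
        update_sprites();
        draw_frame();
        delay_one_tick();                      /* the only thing pacing the loop */
        if (sprite_x > 20) game_running = 0;   /* end the demo after a few frames */
    }
    return 0;
}

Because delay_one_tick() counts iterations rather than real time, the same program pauses for far less wall-clock time on a faster processor, and everything in the loop speeds up with it.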
Initially, one way to get around differing processor speeds was the good old Turbo button (which slowed your system down). Modern applications run in protected mode, and the OS tends to manage resources – in many cases it won’t let a DOS application (which runs inside NTVDM on a 32-bit system anyway) use up all of the processor. In short, OSes have gotten smarter, as have APIs.
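Modern games also sidestep the problem in their own code by measuring real elapsed time each frame and scaling movement by it, so on-screen speed stays the same regardless of CPU frequency. Here is a small illustrative C sketch of that delta-time approach (it assumes a POSIX system for clock_gettime; the sprite and speed values are made up):

#include <stdio.h>
#include <time.h>   /* clock_gettime, CLOCK_MONOTONIC (POSIX) */

/* Current time in seconds from a monotonic clock. */
static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (double)ts.tv_sec + (double)ts.tv_nsec / 1e9;
}

int main(void)
{
    double sprite_x = 0.0;
    const double speed = 100.0;             /* pixels per second of real time */
    double previous = now_seconds();

    while (sprite_x < 300.0) {
        double current = now_seconds();
        double dt = current - previous;     /* seconds elapsed since last frame */
        previous = current;

        sprite_x += speed * dt;             /* movement scales with real time */
        printf("sprite at x=%.1f\n", sprite_x);
    }
    return 0;
}

Whether the loop runs at 60 or 6,000 iterations per second, the sprite still moves 100 pixels per second of real time.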
Heavily based on this guide on Oldskool PC where logic and memory failed me – it’s a great read, and it probably goes into more depth on the “why”.
Tools like CPUkiller use up as many resources as possible to “slow” down your system, which is inefficient. You’d be better off using DOSBox to manage the clock speed your application sees.
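DOSBox exposes this through the cycles setting in its configuration file (dosbox.conf), which caps how many emulated CPU instructions run per millisecond. A typical excerpt might look like the following; the fixed value of 3000 is just an example, and the right number depends on the game:

[cpu]
core=auto
cycles=fixed 3000
cycleup=500
cycledown=500

You can also adjust cycles while a game is running with Ctrl+F11 (slower) and Ctrl+F12 (faster) until the speed feels right.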
If you’re curious about how the actual code was implemented in early computer games (and why they adapt so poorly to modern systems without being sandboxed in some sort of emulation program), we’d also suggest checking out this lengthy but interesting breakdown of the process in another SuperUser answer.

Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.