If software worked and was good in 2005 on PCs with 2 GB of RAM and with CPUs/GPUs vastly worse than modern ones, then why not write modern software the way that software was written? Why not leverage powerful hardware when needed, but keep resource demands low the rest of the time?
What are the reasons it might not work? What problems are there with this idea/approach? What architectural (and other) downgrades would it entail?
Note: I was not around at that time.


It would probably help but also hold you back. It would help by forcing you to stay inside a tighter memory envelope and accomplish your tasks with less compute.
It would hold you back by not having newer extended instruction sets available, depriving you of shortcuts that are likely more efficient than anything you could implement in software alone.
Depending on the goal of your software, the latter might not matter that much.
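To make that concrete, here is a rough sketch (my own illustration, assuming x86-64 and GCC/Clang, with made-up function names) of what "leverage powerful hardware when needed" can look like: the same loop written once as plain scalar C that any 2005-era machine could run, and once using AVX2 intrinsics, with the faster path picked at runtime only when the CPU actually has it.

```c
#include <immintrin.h>
#include <stddef.h>

/* Portable fallback: runs on any hardware, 2005 or 2025. */
static float sum_scalar(const float *a, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* AVX2 path: adds 8 floats per instruction; only exists on newer CPUs. */
__attribute__((target("avx2")))
static float sum_avx2(const float *a, size_t n) {
    __m256 acc = _mm256_setzero_ps();
    size_t i = 0;
    for (; i + 8 <= n; i += 8)
        acc = _mm256_add_ps(acc, _mm256_loadu_ps(a + i));
    float tmp[8];
    _mm256_storeu_ps(tmp, acc);
    float s = tmp[0] + tmp[1] + tmp[2] + tmp[3]
            + tmp[4] + tmp[5] + tmp[6] + tmp[7];
    for (; i < n; i++)   /* leftover elements */
        s += a[i];
    return s;
}

float sum(const float *a, size_t n) {
    /* Detect at runtime: take the shortcut when available, stay modest otherwise. */
    if (__builtin_cpu_supports("avx2"))
        return sum_avx2(a, n);
    return sum_scalar(a, n);
}
```

The point is that the scalar version is the best you can do "in software alone," while the AVX2 version is the kind of hardware shortcut you give up if you target old machines exclusively; runtime dispatch lets one binary do both.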
I mean writing as if for older hardware's power level, but actually intending/compiling it for non-old hardware.
That’s the tighter memory envelope.