I was invited to advise a lawyer friend.
He wanted a website built for his company.
Two weeks went by and the website project was not yet finished.
He brought the software contractor in and expected to rake him over the coals, with my assistance.
When the contractor described the project and his current status, I ended up agreeing with him.
The lawyer was flabbergasted. Two weeks, for a simple brochure? Two weeks of paying consulting rates?
I had to explain to my lawyer friend that software is not built out of black boxes pulled from a shelf. This was a custom job, and he should not expect it to be finished yet.
My lawyer friend did not like my answer.
I gave a demo and showed a working version of some software to a client.
His techie grilled me on how I used mutexes in my software. I told him that I didn’t use mutexes, at which point he declared that the demo was faked.
In his eyes, I was a liar who couldn’t possibly have finished the development work in so short a time.
I have a friend who works in the Film & T.V. industry.
He dabbles in software development.
I asked him how his company manages projects.
His response was “Why do you want to know? These ideas can’t be applied to software development.”
After more prodding, he told me that Film and T.V. development is based on a divide-and-conquer strategy. Work is farmed out to contractors, and the contractors return results as assets (3D graphical objects, in this case).
My friend has created a product for the equestrian world.
The product uses some 40 CPUs.
Each CPU has only about 16K (K not M) of memory.
My friend does not have problems with multitasking.
Order of operations should not matter in producing a given result.
But order does matter in software.
I show this problem in https://guitarvydas.github.io/2020/12/09/CALL-RETURN-Spaghetti.html.
In that essay, I show that we cannot rely on the operation of even a simple 2-box system of software.
Real black boxes don’t work that way. If we have two black boxes plugged together, then we get the same result every time, regardless of how the innards of the black boxes (and the wiring between them) are implemented.
This does not happen in software.
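A tiny sketch of that order-dependence (my own hypothetical code, not from any real system): two software “boxes” that share a piece of hidden mutable state give different answers depending on which one runs first. A true black box would never do this.

```python
# Two software "boxes" that secretly share a piece of mutable state.
state = {"x": 1}

def box_a():
    # Doubles the shared value.
    state["x"] *= 2

def box_b():
    # Adds ten to the shared value.
    state["x"] += 10

def run(order):
    # Reset the shared state, then run the boxes in the given order.
    state["x"] = 1
    for box in order:
        box()
    return state["x"]

if __name__ == "__main__":
    print(run([box_a, box_b]))  # (1 * 2) + 10 = 12
    print(run([box_b, box_a]))  # (1 + 10) * 2 = 22
```

Same two boxes, same wiring, different results - exactly the behaviour that physical black boxes forbid.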
Everyone thinks that software is built using black boxes.
The Mars Pathfinder problem was caused by priority inversion.
This problem was caused by the use of an RTOS.
An RTOS is a stripped-down operating system.
The RTOS was built using “best practices”, but these best practices led to a hairy, intermittent bug. The cause of the bug was not understood at the time of construction of the Pathfinder software, but it was always there.
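The inversion pattern itself is easy to reproduce in a toy, deterministic scheduler model (the task names, priorities, and work amounts below are made up for illustration; this is not the actual Pathfinder/VxWorks code): a low-priority task holds a lock, the high-priority task blocks on that lock, and a medium-priority task preempts the low one - so the high-priority task silently waits on the medium one.

```python
# A toy, deterministic scheduler model of priority inversion.
def simulate(med_work):
    """Each tick, run the highest-priority task that is ready and not
    blocked on the shared lock; return a tick-by-tick log of who ran."""
    tasks = {
        "low":  {"prio": 1, "release": 0, "work": 3, "needs_lock": True},
        "med":  {"prio": 2, "release": 1, "work": med_work, "needs_lock": False},
        "high": {"prio": 3, "release": 1, "work": 2, "needs_lock": True},
    }
    lock_holder = None
    log = []
    t = 0
    while any(task["work"] > 0 for task in tasks.values()):
        ready = [n for n, task in tasks.items()
                 if task["release"] <= t and task["work"] > 0]
        # Blocked = needs the lock while someone else holds it.
        runnable = [n for n in ready
                    if not (tasks[n]["needs_lock"]
                            and lock_holder not in (None, n))]
        if runnable:
            n = max(runnable, key=lambda name: tasks[name]["prio"])
            if tasks[n]["needs_lock"] and lock_holder is None:
                lock_holder = n          # acquire the lock
            tasks[n]["work"] -= 1
            if tasks[n]["work"] == 0 and lock_holder == n:
                lock_holder = None       # release on completion
            log.append(n)
        else:
            log.append("idle")
        t += 1
    return log

if __name__ == "__main__":
    # "high" becomes ready at t=1, but cannot run until "med" has
    # finished, because "low" holds the lock and "med" preempts "low".
    print(simulate(4))
```

The log shows “high” running only after every tick of “med” - the high-priority task’s latency is hostage to an unrelated medium-priority task. Priority inheritance patches this by letting “low” temporarily borrow “high”’s priority while it holds the lock.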
Priorities were invented to ameliorate the problems of time-sharing.
Time-sharing was invented to ameliorate the problems of CPU expense.
Priority inheritance was invented to ameliorate the problems of priorities.
[Q: Why was it possible to enable/disable priority inheritance? What niggly problem was that supposed to solve?]
For more about Epicycles, read Arthur Koestler’s “The Sleepwalkers”. The book documents the switch from Ptolemaic Cosmology to Copernican Cosmology. The Ptolemaic scientists formalized the practice of adding baubles to the existing theory instead of fixing the root problem. They called this formalism Epicycles.
To avoid thread safety issues, give each thread its own isolated CPU.
When CPUs are isolated from one another - they cannot interact, except via very constrained channels.
There are no thread safety issues.
Thread safety is accidental complexity.
Thread safety is not essential complexity.
The need for thread safety is caused by an optimization - the attempt to share memory across many tasks. This premature optimization leads to new problems, i.e. accidental complexity.
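A minimal sketch of the isolated-CPU idea, approximated here with OS processes (the worker and function names are mine): each worker process has its own private memory, and the only way in or out is a queue - a very constrained channel. Notice that no mutex appears anywhere.

```python
# Isolation approximated with OS processes and queues.
from multiprocessing import Process, Queue

def summing_worker(inbox, outbox):
    total = 0                           # private state, never shared
    for item in iter(inbox.get, None):  # None is the stop sentinel
        total += item
    outbox.put(total)                   # the only outbound message

def run_isolated_sum(values):
    inbox, outbox = Queue(), Queue()
    worker = Process(target=summing_worker, args=(inbox, outbox))
    worker.start()
    for v in values:
        inbox.put(v)                    # messages in ...
    inbox.put(None)                     # ... then the stop sentinel
    result = outbox.get()               # one message out
    worker.join()
    return result

if __name__ == "__main__":
    print(run_isolated_sum([1, 2, 3, 4]))
```

Because no memory is shared, there is nothing to protect: thread safety never comes up.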
To avoid fairness issues, give each thread its own isolated CPU.
When CPUs are isolated from one another - they cannot interact, except via very constrained channels - there are no fairness issues. Each CPU runs at its own speed.
Fairness is accidental complexity caused by an optimization (sharing the CPU for many tasks).
Fairness is not essential complexity.
The only issue is: are the CPUs fast enough to accomplish the given task?
To conquer multitasking and to make multitasking easy to use, give each thread its own isolated CPU.
CPUs cannot share memory.
CPUs cannot time-share.
Computer Science clings to the notion of using Recursion and Loops.
Threads were invented to accommodate time-sharing.
To accommodate deep recursion and loops when using threads, Computer Science invented full preemption.
Ironically, loops make no sense on the internet.
You cannot “loop” a pair of distributed computers, you can only send messages between them.
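To make the contrast concrete, here is a minimal sketch (both endpoints are simulated in one process purely to keep the example self-contained; in reality they would be separate machines on a network): the only thing one computer can do to another is send it a message and wait for a reply.

```python
# One computer cannot "loop" another; it can only send it a message.
import socket
import threading

def serve_one_request(ready, port_box):
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))          # pick any free port
    port_box.append(srv.getsockname()[1])
    srv.listen(1)
    ready.set()                         # tell the client we are listening
    conn, _ = srv.accept()
    msg = conn.recv(1024)               # receive one message ...
    conn.sendall(b"ack:" + msg)         # ... and answer with one message
    conn.close()
    srv.close()

def send_message(payload):
    ready, port_box = threading.Event(), []
    server = threading.Thread(target=serve_one_request,
                              args=(ready, port_box))
    server.start()
    ready.wait()
    cli = socket.socket()
    cli.connect(("127.0.0.1", port_box[0]))
    cli.sendall(payload)                # send a message ...
    reply = cli.recv(1024)              # ... and wait for the reply
    cli.close()
    server.join()
    return reply

if __name__ == "__main__":
    print(send_message(b"hello"))
```

There is no shared call stack between the two endpoints - no CALL, no RETURN, no loop - only messages.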
Full Preemption was invented to accommodate loops (and recursion) on threads.
Full preemption causes many accidental complexities.
Full preemption is only needed to simulate multiple CPUs on a single computer.
To get rid of full preemption, just give each thread its own isolated CPU and its own isolated memory space.
We, ultimately, want true distributed computing.
We could do anything if CPUs were free.
Multi-core CPUs are but a half-measure towards achieving the above goal of truly distributed computing.
Multiple cores were invented by hardware designers who were tired of waiting for software to catch up.
If we don’t have enough CPUs to go around, we end up simulating CPUs by using VMs and threads. Better yet, we can simulate isolated CPUs - something software professionals tend not to do, out of a zeal for premature optimization.
CPUs were very expensive in the 1950s.
Simulation of multiple CPUs was invented early to ameliorate this expense.
This kind of simulation - we call it threads - has led to a myriad of accidental complexities.
These complexities only exist in the simulations, and disappear entirely if we allocate enough CPUs to the problem.
A large portion of Computer Science consists of the analysis of accidental complexities caused by the imposition of the epicycle we call threads.
Early Computer Science also tackled the problem of how to structure data and wrestled that issue to the ground.
Yet, the issue of threads continues to wriggle out of Computer Science’s grasp.
This is a tell.
Likewise: if TDD had succeeded, then all software professionals would be using TDD exclusively. They don’t.