Programs have data.
Let’s apply the fractal principle to data and see what emerges.
Let’s divide data handling into two categories and dig into each more deeply.
Several PLs provide getters and setters for data.
A setter function takes a piece of data and assigns it some value.
Q: Does the setter check the validity of the information? Or, is that operation moved elsewhere?
Using fractal thinking, it becomes “obvious” what a setter should do.
Setting becomes 2 operations:
1a. Validate
1b. Raw set
The operations can be pipelined to make a type stack.
Setting should not be conflated with validation.
We are familiar with the concept of validation - many web-based forms use some kind of input validation.
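A minimal sketch of this split, in Python (the class and method names are my own, not from the essay): the public setter pipelines a validate step into a raw-set step, so neither is conflated with the other.

```python
# Sketch: "set" split into two pipelined operations,
# 1a validate, then 1b raw set.

class Temperature:
    def __init__(self):
        self._celsius = 0.0  # backing store

    # 1a. Validate: check the incoming value, reject bad input.
    def _validate(self, value):
        if not isinstance(value, (int, float)):
            raise TypeError("temperature must be a number")
        if value < -273.15:
            raise ValueError("temperature below absolute zero")
        return float(value)

    # 1b. Raw set: store the value, with no checking here.
    def _raw_set(self, value):
        self._celsius = value

    # The public setter pipelines the two operations.
    def set_celsius(self, value):
        self._raw_set(self._validate(value))
```

With this shape, `set_celsius(21.5)` stores the value, while `set_celsius(-500)` is rejected by the validate stage before the raw set ever runs.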
The concept of getters is not specific enough.
A person reading the code can see that data is being fetched, but not why the data is fetched.
We get data for a reason.
Typically, we want to query the data in some manner.
Querying can be a simple get, or it can be a more involved operation: a rule involving the value of one piece of data, or a rule involving many values across many pieces of data.
Getting also breaks down, fractally, into two operations:
2a. Raw get
2b. Query
The fractal concept has no bottom. Each of the above operations can be further sub-divided, recursively¹.
For example, raw get might further be broken into
- getting from memory
- getting from a database.
Likewise, query might be further broken down into
- querying information from memory
- inferring new information based on information from memory.
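One more fractal level can be sketched like so (hypothetical Python, assuming an SQLite table of my own invention): raw get splits into a memory get and a database get behind one interface, and query splits into fetching stored facts versus inferring new ones.

```python
# Hypothetical sketch of one more fractal level of sub-division.
import sqlite3

cache = {"alice": 30}

# Raw get, sub-divided: getting from memory...
def get_from_memory(key):
    return cache.get(key)

# ...or getting from a database.
def get_from_database(key, conn):
    row = conn.execute(
        "SELECT age FROM people WHERE name = ?", (key,)
    ).fetchone()
    return row[0] if row else None

def raw_get(key, conn):
    # Try memory first, fall back to the database.
    value = get_from_memory(key)
    if value is None:
        value = get_from_database(key, conn)
    return value

# Query, sub-divided: inferring new information
# from information that was merely fetched.
def infer_is_adult(key, conn):
    age = raw_get(key, conn)
    return age is not None and age >= 18
```

The caller of `raw_get` never learns which sub-division answered; the subdivision stops here only because this is "good enough" for the example, not because we hit bottom.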
Conflation of Get and Set
Many PLs provide getters and setters but don’t further specify how the data is validated nor why the data is fetched. These otherwise-simple categorizations are hidden in the code. The reader must reverse-engineer the validation and querying intent from the code.
This kind of code conflation creates complexity. The Designer knew how and why he was doing certain operations, but lacked the PL syntax to communicate the Design Intent to future readers of the code.
[Aside: I argue, in other essays, that Architects should invent SCNs to describe their DI.]
¹ We stop sub-dividing a problem when the sub-divisions are “good enough” to solve the problem, not when we “hit bottom”. ↩