[erlang-questions] Microservices vs. Erlang processes
Wed Jul 9 11:40:45 CEST 2014
On Monday 07 July 2014 23:54:41 you wrote:
> Hi Craig,
> >The idea of using actual micros for "micro"-sized services is
> > intriguing, but the implications seem to be way outside the realm of the
> > average web-style VC's imagination.
> This is an "intriguing" statement. Can you please elaborate on the
I apologize in advance for the huge and unorganized nature of the post below.
Anyone not really interested in this, please skip reading it instead of
flaming me for verbosity.
Blurring the lines between physical infrastructure and software infrastructure
can become very natural, but it requires a mode of thinking to which neither
software people nor, say, building architects are accustomed. This is what I
am getting at when I vaguely refer to "implications".
Imagine a bridge where each truss, buttress, suspension cable socket, surface
segment, etc. is aware of its location in relation to the other pieces, and where
each joint is an input device to the larger segment. When actuating joints
are used (not unusual at all in large structures, but I've never seen any
actuating joints that took readings on their current angle, load/stress,
etc.), it suddenly becomes possible for a structure to deduce what external
forces must be acting on it to cause its recent collective actuation history
-- and if even some of the joints are powered (this can take many forms) it
would be possible for a suspension bridge to actively dampen the torquing
effect of wind, windmills to actively feather, segmented road designs to react
to ground saturation, etc. These ideas have occurred to other people, of
course, but the approach to solving the implementation problem has always been
procedural at the code level, centralized at the processing level, and
ultimately unsafe to build, because handling partial failure involves too many
cases when viewed from the top level.
The ideas necessary for large structure segments to work well together turn
out to be the same ideas necessary for coordinating very small structures --
spray-on screens, microcontrollers mixed into concrete aggregate, etc.
Self-discovery, "relative -> definite" orientation discovery,
etc. permit a very "sci-fi right now" type design mindset where the difference
between physical architecture of everyday devices and the software
architecture that makes those devices smart melts away. (I wince at using the
term "smart" here -- it is indeed what I mean, but the term has been already
so abused that it requires a semantic rebirth.) Any future where we can mimic
organic brain activity by recalling ad-hoc paths through physical, incidental,
aggregate storage instead of directly retrieving data sets of a known/expected
type from a dedicated block device requires this sort of thinking.
Consider a current-day "smart" coffee maker. It is first a coffee maker,
entirely dumb. Then the "smart" bits are tacked on as an afterthought. They
are entirely alien to the actual coffee maker. There is very little difference
between having an old-fashioned coffee maker controlled by a robot and a
"smart" coffee maker. Mr. Bean's morning routine is an equivalent solution
from a technical point of view.
A truly smart coffee maker, on the other hand, would have a point of view. It
would not just know how much water was in the reservoir, it would *ask* the
reservoir how much water it had available. This means the reservoir would be
"smart" as well. If it provides water, why is it only providing water to the
coffee maker? Why isn't this part of the building's water supply utility? This
makes a lot more sense, actually, but so far nobody has approached kitchen
appliances in this way and so a water provision protocol has not yet needed to
occur to anyone. And so on.
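To make the coffee maker point concrete in Erlang terms, here is a rough
sketch of what "asking the reservoir" might look like, with the reservoir as
its own process with its own point of view. All of the names here (reservoir,
water_level, dispense) are invented for illustration -- as said above, no real
water provision protocol has needed to occur to anyone yet:

```erlang
%% Hypothetical sketch: the reservoir as a process the coffee maker
%% queries, rather than a dumb sensor it polls. Module, function and
%% message names are invented for illustration.
-module(reservoir).
-export([start/1, loop/1, water_level/1]).

start(InitialMl) ->
    spawn(?MODULE, loop, [InitialMl]).

loop(Ml) ->
    receive
        {From, Ref, water_level} ->
            From ! {Ref, {water_level, Ml}},   %% answer whoever asks
            loop(Ml);
        {dispense, Amount} when Amount =< Ml ->
            loop(Ml - Amount)                  %% track our own state
    end.

%% What a truly smart coffee maker would call: a synchronous ask.
water_level(Reservoir) ->
    Ref = make_ref(),
    Reservoir ! {self(), Ref, water_level},
    receive
        {Ref, {water_level, Ml}} -> Ml
    after 1000 -> timeout
    end.
```

Note that the same loop could just as easily answer a dishwasher or the
building's supply monitor as the coffee maker -- which is the whole point of
the reservoir having a point of view of its own.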
There are numerous chicken-and-egg problems involved here, of course, but
handling chicken-and-egg situations gracefully is part of what true robustness
is really about anyway. Incidentally, chicken-and-egg problems also provide an
opportunity to sell two products in place of one: a standalone water device,
or a "smart provision" add-on to existing water coolers or sinks, in addition
to the coffee maker itself. This last point, and a consumer case along the
lines of the coffee maker example, is probably the only hope we could possibly
have today of interesting VCs in re-examining "micro" services of this form.
(I just pulled the coffee maker thing out of my nose, by the way; it's not a
well-thought-out idea -- but it's sort of a fun example to toy with, because it
turns out that finding points of interface among common problems in
living/working space management can indeed begin with something so trivial.)
These are just elementary examples; my point is that truly "micro"-sized
services embedded in devices which are themselves micro-sized open up a new
style of thinking about the concrete problems of the physical world that
computers alone are incapable of dealing with and inflexibly built "smart"
systems are incapable of evolving around without inordinate amounts of
constant retrofit/reprogramming work.
A lot of time is spent today telling a microcontroller to execute exactly the procedure
"start; A(); B(); C(); end". Usually in C, assembly, or some special extension
of C that embraces special hardware features directly. All of that work is
obsolete when the scope of the job changes, of course. A lot of time is spent
elsewhere writing Java or Objective-C programs for our "smart" phones. Neither
system is adequate to take the state of computing a step further than it has
already been in the last several decades, because (ironically in the phone
case) the languages themselves and the runtime environments within which they
execute are simply not designed around the realities of massive concurrency or
even particularly well suited to elementary message passing! These languages
and environments *still* require that special incantations be uttered to take
advantage of more than a single processing device or storage location!
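For contrast, here is roughly what the same fan-out looks like from the Erlang
side, where spawn and ! need no special incantation because they *are* the
concurrency model. The step/1 function is an invented stand-in for the A(),
B(), C() of the procedural version:

```erlang
%% Sketch: fanning steps out across processes with plain message
%% passing -- no special incantations required. step/1 stands in
%% for the A(), B(), C() of the procedural microcontroller version.
-module(steps).
-export([run/0]).

step(N) -> N * N.

run() ->
    Parent = self(),
    Pids = [spawn(fun() -> Parent ! {self(), step(N)} end)
            || N <- [1, 2, 3]],
    %% collect the results back in spawn order
    [receive {Pid, Result} -> Result end || Pid <- Pids].
```

Calling steps:run() returns [1, 4, 9], with each step having run in its own
process; nothing about the code changes if those processes later live on
different devices.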
This whole mode of thinking is drastically underpowered in the face of problem
spaces larger than "send an email", "open a web page", "run game X",
"open_valve()", "print_board()" etc. Sadly, these are the same problems we've
already been stubbing our toes on for the last 50 years in software. The only
thing that makes the world seem any "smarter" or "more connected" today is
that the boob-tube has been renamed to "youtube" and Bruce Wayne isn't the
only one who can afford powerful computers in handheld form. What all this
computing power is doing, though, is simply wasting cycles in social
distractions, mostly as a cover to sell things nobody knew they wanted (or
even knew existed) before the internet, or as a cover to track what they do.
The problem of creating truly intelligent infrastructure, reactive
environments, self-recovering *physical* systems, etc. remains, and it remains
a space that has been so far only scratched at around the edges, and never
really tackled head-on. Since the advent of the web this set of problems seems
to have been forgotten entirely, replaced by dreams of quick cash and
notoriety. "Entirely" forgotten outside military research, I should add. But
they are stubbing their toes as well. In Ada.
Something like Erlang/OTP (and I really have to include the OTP part here)
enables a different way not just of thinking about concurrency, inter-process
communication, "process" defined the way "object" used to be, etc. but a whole
different culture of thought. The Erlang culture hasn't interfaced much with
hardware culture, and "hardware" culture hasn't interfaced much with
mechanical, civil, etc. engineering culture, so there hasn't been a lot of
cross-pollination yet. Someday this sort of thing will be natural, but for now
it's very unusual to meet a programmer who is also a hardware developer who is
also a construction engineer.
The magic blocking heisenproblems that have really been holding things up have
been concurrency, embedded processing, and routing/addressing across
heterogeneous, autonomous devices. If this is what we would take "microservices
architecture" to mean, which is very nearly what more than one member of this
list took the phrase to mean on first reading, we would be headed toward a
closer relationship with the physical world and perhaps less enthused by the
possibility of using Erlang/OTP to back-end yet another market analytics site
or social media platform or CMS platform, etc. on the web.
OTOH, the brain is the largest erogenous zone and the web targets it readily.
Much like rats given the choice between simulated pleasure stimuli and food,
it is quite possible that most humans are more interested in being spoon-fed
simulated realities than actually dealing with the real world on its own terms
in the most powerful ways we can muster.
I'll close this huge and disorganized post with an excerpt from Wikipedia's
page on the Fermi Paradox:
> They are too busy online
> It may be that intelligent alien life forms cause their own "increasing
> disinterest" in the outside world. Perhaps any sufficiently advanced
> society will develop highly engaging media and entertainment well before
> the capacity for advanced space travel, and that the rate of appeal of
> these social contrivances is destined, because of their inherent reduced
> complexity, to overtake any desire for complex, expensive endeavors such as
> space exploration and communication. Once any sufficiently advanced
> civilization becomes able to master its environment, and most of its
> physical needs are met through technology, various "social and
> entertainment technologies", including virtual reality, are postulated to
> become the primary drivers and motivations of that civilization.
More information about the erlang-questions mailing list