Architected Futures™

Tools and strategies ... for boiling the ocean

Use Case: Migration Planning

Background

This is an attempt to provide a "system specification in a box." This is the algorithm the AD was conceived to enable: migration planning from now to some point of theoretical reality in the future. If something is like this now, how can it be transformed into something different (today plus or minus some feature or condition)? For mechanical, near-term change, the process is too complicated to address in this manner. But for the long term, algorithms can be defined that make navigation through change much smoother than current world conditions allow. Once primed, the algorithm operates as a continuous-refinement, reinforcing loop. It becomes more accurate as it is used. Thus, over time, the algorithm's lack of "range" into the near term becomes less and less of an issue. Variability becomes less erratic, harmonic balance reasserts itself, and diversity becomes productive rather than alarming. These are natural system behaviors when a system operates within normal ranges of conditions.

I'm writing this in early 2017 in the United States of America. Change is constant, and erratic. Hypothesis: "a computer program could run societies better than current legislatures, executives, and judiciaries." Actually, that's not my suggestion. But it makes an interesting test case for AI. That's part of what got me moving on this last year.

First: the reason it's going to fail. While the software will (eventually) be able to make very high quality predictions of what might happen under expected scenarios, real life is more complex than the machine can ever possibly know. More importantly, machines can't independently conceive beliefs, vision, and values. And those are societal drivers. The algorithm works on a feedback loop. It has to be told something before it knows about it. And there are always new things being discovered, or identified as significant, that the algorithms didn't know about before. And societies change their beliefs, vision, and values. Societies mature. What computers know comes from people. People feed computers what they (the people) want the computer to "know." All the computers know is what they've been told, and how they've been told to use it. Garbage In, Garbage Out. Better information in, better information out. They don't innovate. If they did, they wouldn't work. They would be broken. You don't want a computer deciding that it's okay sometimes to multiply incorrectly and come up with wrong answers. (That's a real experience from my past on an early hybrid model of a computer [1]. It caused a lot of grief.) Innovation is bad for computers. Because of that, they don't know how to handle situations they've never been in before. But for government (making laws, making judicial decisions, administering the laws, defining policies, managing human behaviors) you need to be able to innovate. Or bad stuff happens: the other group bullies you, things turn out unfairly arranged, systems run amok, things don't get repaired, and so on.

Whether the hypothesis fails or not is a matter of theory. (How far it extends into long-term forecasting is a subject for debate on innovation. You never know until you're there. First you need to get there. You need to demonstrate it. Then you can see how far it can be pushed, although you should have a good idea before you put it into practice, as a safety check.)

But you don't have to prove the theory for it to be useful. An intelligent machine can help you make better decisions. There is a range of problems where thinking tools can be very helpful. Calculators are a big help for lots of people. Bill pay systems are very useful. Personal Information Managers (PIMs) were an early success story. And now most phones are very powerful small computers. So an architecture for an automation system that could provide high quality intelligence related to risk, for planning and operational support; to governance, for system optimization; and to compliance management, for policies (and that could potentially fit in a small footprint as a personal device) seems like a good idea to pursue.

We already have applications that do this in small ways, but they are not well organized as a general tool. And the better organized tools tend to be complex and are difficult to afford and use effectively.

There are a lot of uses of computers as intelligence assistants already in place, in pieces. We run the world economy using computer models: models that can express many viewpoints about how things relate, and that identify what, under those and other known conditions, "might" happen. Some people do similar things at a family or personal level with personal financial planning programs. And it also happens at various levels of federated enterprises, including both governments and NGOs, but at a different "scale" of detail awareness. The more we know within the scope of our environments and concerns, and the better the quality of the information used, the more "might" turns into stronger, sometimes much stronger, opinions. We use weak opinions for personal forecasting, unless we can afford a financial advisor, and much stronger opinions as our concerns get larger. And to get really good opinions you need well thought out models. Not all forecasting models are about economics. But economics interacts with a lot of other models. It is a good base platform for relationships, but it isn't the only one. Not all value judgements need to be, or should be, based exclusively on economic factors.

Problem Statement

The essence of the hypothesis is that a computer program could "run" a society of humans. This is a concept that used to be presented in science fiction novels, usually dystopian ones. In 1940 it was science fiction. By the early 1970s it had moved from science fiction to a social experiment in the form of Stafford Beer's Cybersyn Project. As envisioned by Beer, the computer was not truly responsible for "running" the Chilean economy; it was used as a technical advisor to the humans who actually "ran" the economy. Running implies "management." Fully running something requires full management control with predictable results, and the ability to handle the unexpected when it occurs outside the bounds of predictions. It involves the functions of planning, task and resource organizing, direction setting, and process control. There is a more abstract and ambiguous side to planning and direction setting, and a more focused, specific, physical side to operations and process control.

In the 1960s robots were science fiction. Today robots are a very real part of life, and becoming more involved all the time. We use robots very well, and often, on the focused, specific, physical side of the management equation. We have some use, but much less, on the abstract side. But that is changing rapidly. That is why the hypothesis is not a "fantasy" question, but a real, practical hypothetical. A key issue is the ability to forecast and plan: in effect, whether a computer can serve as a design assistant for a complex social system. Stafford Beer developed a conceptual model to help with that process, called the Viable System Model (VSM). VSM is used in support of organizational consulting, and is part of our technical program design.

Planning as Process

Planning has a lot of similarity to forecasting. Fundamentally these are all forecasting problems that can be evaluated with a general engineering pattern, or model. Variations of:

  • What is the situation right now?
  • Where do I want to be? or, Where do things appear to be moving? or both
  • How do I get there? or, How strong are the net directional forces? or both
  • And, sometimes the models care about Why. (Secondary factors to consider, or looking for immediate, quick resolution?)
  • Sometimes they care about Who (Roles? Related or primary players? Other involved parties? Are there "innocent civilians" who might be harmed?)
  • And, sometimes intervening timing issues are important. (Continuous or Discrete? Schedules? Pertinent events?)
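To make the pattern concrete, the facets above can be captured as a simple data structure. This is a minimal sketch; the class and field names are my own illustrative labels, not an established schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PlanningQuestion:
    """One instance of the general forecasting/planning pattern."""
    current_state: str                        # What is the situation right now?
    target_state: Optional[str] = None        # Where do I want to be?
    observed_trend: Optional[str] = None      # Where do things appear to be moving?
    path: Optional[str] = None                # How do I get there? / net directional forces
    why: Optional[str] = None                 # Secondary factors, motivations
    who: list[str] = field(default_factory=list)  # Roles, players, affected parties
    timing: Optional[str] = None              # Continuous/discrete, schedules, events
```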

Situation appraisal involves:

  1. The current state
  2. The expected or desired end state as a target to be solved for 
  3. Transmutation information concerning change from #1 to #2,
    1. If #2 is a predefined target, we want to compute potential paths, and recommend the "best" paths [2] for achievement based on value criteria.
    2. If #2 is to be computed as an "unconstrained" variable, you need as input, the significant components of one or more models that define how the environment acts on the system under different conditions, and you need statements of expected values for those conditions. This is how flight vectors are calculated to account for wind and other variables. These are physics equations, based on parameters.
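The flight-vector case can be made concrete. The sketch below solves the classic wind-triangle problem: given a desired ground track, true airspeed, and a known wind, it returns the heading to fly and the resulting groundspeed. The function name and units (degrees, knots) are my own choices for illustration.

```python
import math

def wind_correction(track_deg: float, tas_kt: float,
                    wind_from_deg: float, wind_kt: float):
    """Solve the wind triangle for heading and groundspeed."""
    # Angle of the wind source relative to the desired track.
    awa = math.radians(wind_from_deg - track_deg)
    # The crosswind component must be cancelled by the heading
    # correction angle (law of sines).
    wca = math.asin(wind_kt * math.sin(awa) / tas_kt)
    heading = (track_deg + math.degrees(wca)) % 360
    # Forward speed along the track, reduced by the headwind component.
    groundspeed = tas_kt * math.cos(wca) - wind_kt * math.cos(awa)
    return heading, groundspeed

# Example: hold a 090 track at 120 kt with a 20 kt wind from 045:
# crab about 7 degrees left and expect roughly 105 kt over the ground.
print(wind_correction(90.0, 120.0, 45.0, 20.0))
```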

We don't answer, or even investigate, all aspects of this process on all problems, because each aspect requires special programming, data, and so on. That's true when people do the planning, and it's true when computers are used as planning tools. For computers we specialize our current AIs, and we have to calibrate them. I have heard it expressed that the question around AI in automated cars and trucks is a question about what "kill rate" we will find acceptable. (Driving is a dynamic planning exercise.) Automated cars, on occasion, will probably kill people. But so do human drivers. So there is a balancing equation that needs to be made. That final decision is for a set of humans to make. A computer cannot make the policy. It has no concept of what the policy is about: assigning a significance figure to a human life (allowable deadly accidents [3] per human). But humans do that for all manner of reasons today, especially in medical procedures, airline safety, counter-insurgency operations, etc. The issue with cars and trucks is similar to airline operations. Computers fly a lot of planes, especially the big ones, where lots of lives are at stake. Computers are, or can be, safer drivers than humans. But humans have to set policy. In that space, computers don't know what they're doing. That's why the military has trained "drone operators" for weapons. But it's also why you need a trained drone operator for search and rescue. Those are different issues, but in both cases you need human oversight and human direction.

So, the specification for Architected Futures is the architecture of a form of "artificial brain" that can be used to help humans make intelligent policy decisions. Not to make policy or enforce steering decisions. Within that scope, effectiveness feedback about past recommendations creates a reinforcing loop to improve planning model effectiveness; and control monitoring provides indicators on general operations effectiveness. This is not new. We do this today, but with narrowly focused AI systems, for things like electronic trading, heart pacemakers, electric power grid monitors, etc. What is new in this proposal, as far as I can see, is the generalization of the model and the effect that has on dealing with change on a wide range of issues. What it allows is the integration of models for dealing with complex systems.
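As a toy illustration of that reinforcing loop (a sketch of the idea only, not the actual calibration scheme; the learning rate is an arbitrary assumption), each round of effectiveness feedback can nudge a forecasting parameter toward what actually happened:

```python
def refine(parameter: float, predicted: float, observed: float,
           learning_rate: float = 0.1) -> float:
    """One turn of the feedback loop: shift the parameter in
    proportion to the last forecast error."""
    return parameter + learning_rate * (observed - predicted)

# Repeated over many planning cycles, the forecast error shrinks:
# feedback about past recommendations improves model effectiveness.
```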

The Challenge

What we want to do is give the computer system insight and understanding so that it can assist one or more persons in making good policy decisions. We then want to reflect on those decisions as they are put into effect, and measure their effectiveness in terms of desired outcomes. And when problems arise, we want to know how best to address them. Was it an aberration we can otherwise ignore, or is it a systemic problem? If it was systemic, is the model wrong, or is the model insufficient in understanding? The core challenge as I see it is the following:

  1. Given a problem, or a question about how something works, or a desire to do something, how do you go about, with research, social interaction, and computer assistance, solving it, finding out, or managing the process of navigating to goal accomplishment?
  2. How do you map a system that uses systems thinking, intelligent discourse with other humans, and whatever knowledge processing might be made available from a cloud-connected device … to answer that question?

The first question is a general systems thinking question related to the challenge. For the computer to be of value as a subsystem, an assistant, we need to establish context. Given an engineering challenge (defining how something works, correcting a defect, or planning an improvement) on some system, how well can a computer help a group of humans resolve it? How "smart" can a computer be in understanding complex issues involving human circumstances? This establishes a context for the challenge.

The second question is the direct technical challenge: the design of a computer algorithm for a form of artificial brain. An algorithm that understands a general systems thinking abstract problem, and can work the problem to suggest quality engineered potential solutions. And that can potentially work those solutions to varying degrees of thoughtfulness, depending on parameters and choices that have been predetermined. (That's the design limitation: given parameters and predetermined policy choices. The Achilles' heel of the hypothesis is negating the need for human controls. Computers can't handle complete unknowns. Sooner or later there are always "unexpected" conditions, or conditions outside of control ranges, or conflicts between policies that cannot be algorithmically resolved.)

Insights

Core to the challenge is the separation between the two questions, and the similarity of the two questions. Fundamentally there is a binary component, with different specialties on each side of the divide, and a need to bring the two together for a unified solution. If two or more humans try to solve a problem, they must come to a "meeting of the minds" about what the problem is, and then apply thinking from their different viewpoints. For complex problems the "meeting of the minds" can become very difficult, which makes the second part largely argumentation where people "talk past each other." Adding a computer doesn't help. Witness debates on "climate change," which include very sophisticated computer models.

  • Insight: Before using physical models to argue physics, there needs to be agreement on conceptualizations of the problem space. This includes a definition of "what is the problem space."

This isn't hopeless, because fundamentally we have schemes for synchronizing conceptualizations around problems. We call them models. We have a number of very general models that can be used to describe other models.

The rest of this material discusses the design for an intelligence augmentation system to address the challenge from within those operational bounds and insights.

Architecture Program Overview

[Figure: Architecture Program overview diagram] The overview diagram for the architecture metrics program, which is used to manage the architecture program, is shown on the right. In the center of the diagram is a series of visual panels representing the state of a system as it operates at different points in time. On the surface, in yellow, is an "as is" system. That defines our model for "how things work" today. Behind that are a series of panels that shade to green. Those define a series of additional, "specification defined" contexts indicating how we want future versions of the system to evolve.

Multigenerational planning is the algorithm we use to guide evolution through that space. From "as is" to "to be."

MGP

Migration planning is accomplished under the general form of a multigenerational plan (MGP). There is a standard mapping model that can be computer generated based on an understanding of the model types which need to be compared. Given two models, assumed to be an "As Is" model and a "To Be" model, and a statement about the various features to be found between the two models, a general engineering formula exists to compare them. [This is engineering for both problem determination and design (planning).]

  • For each element that exists in both "as is" and "to be" with no change in specification, leave the element alone and do not change it. If it has to be moved or modified during the transition, it needs to be restored in the new environment.
  • Any "as is" element that is not in "to be" needs to be eliminated or mitigated.
  • Any "to be" element that is not in "as is" needs to be created or activated.
  • Any element that is in both environments but changed needs to have the appropriate modifications made.

The overall algorithm isn't complicated. It's a Gap Analysis. The complications come from dealing with the specific elements. If we are talking about replacing a good heart or kidney or liver with a replacement unit, the complications are in the fact that it can be life and death surgery. That's a lot different than replacing a spark plug on a 1957 car; but both processes follow that same general model.
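The gap analysis itself is mechanical. A minimal sketch, assuming each model is represented as a mapping from element name to its specification (the function and key names are hypothetical):

```python
def gap_analysis(as_is: dict, to_be: dict) -> dict:
    """Classify elements by the four migration rules above."""
    shared = as_is.keys() & to_be.keys()
    return {
        "keep":      sorted(k for k in shared if as_is[k] == to_be[k]),
        "modify":    sorted(k for k in shared if as_is[k] != to_be[k]),
        "eliminate": sorted(as_is.keys() - to_be.keys()),  # in "as is" only
        "create":    sorted(to_be.keys() - as_is.keys()),  # in "to be" only
    }

# The heart transplant and the 1957 spark plug both land in "modify";
# the complications live in how each modification is carried out.
print(gap_analysis({"plug": "worn", "fins": "chrome"},
                   {"plug": "new",  "radio": "AM"}))
# {'keep': [], 'modify': ['plug'], 'eliminate': ['fins'], 'create': ['radio']}
```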

The MGP, by specification, is defined as a model in the form of a nine-element grid, with three rows and three columns. MGP planning is used to effect and manage complex change. From a single "as is" we can map to nine conceptual spaces for a series of elements, and optimize how to move those elements over time to effect the greatest benefit for the overall system being managed. And we can do that on a rolling, continuous basis for analytical purposes. This is how we manage technology-based industries using quality metrics. It provides a framework for well managed, smooth, continuous delivery of product while continuously integrating innovation and other forms of change. In our case we are managing the architectural dynamics of the system being managed.
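The grid table itself is not reproduced here. As an illustration only, one common MGP layout (my assumption, not taken from this document) crosses rows for vision, features, and enabling technology against three generations:

```python
GENERATIONS = ["Gen 1 (as is)", "Gen 2", "Gen 3 (to be)"]
ASPECTS = ["Vision", "Features", "Technology/Platform"]  # assumed row labels

# Nine-cell MGP grid: each cell holds the planned state of one
# architectural aspect at one generation; planning moves elements
# through the cells on a rolling, continuous basis.
mgp = {(aspect, gen): None for aspect in ASPECTS for gen in GENERATIONS}

mgp[("Features", "Gen 2")] = "add compliance monitoring"  # hypothetical entry
```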

 

... conceptual architecture, then high level technical approach: Basic knowledge base and algorithm about model based abstraction. Then VSM, MOF, etc.

 

  • 1. It was a problem we had at Bank of America with a bad chip or something on a special model of the IBM S/360 or early S/370. As I remember, we had computed an entire run of savings account interest calculations when the multiply error was discovered. We also had jobs where sort routines would lose data while sorting, because our files were so large and early tape drives were error prone. There were also software errors, but computer errors are computer errors, whatever the cause. Mistakes are mistakes.
  • 2. Best can sometimes be "worst" in terms of a specific variable. For example, critical path scheduling looks for the path with the least slack as the path of focus.
  • 3. Note, there are also ethical questions to be resolved. Scenarios can develop where the choice is less whether to kill than whom to kill. This isn't your everyday driving lesson once you turn the task over to a computer. Computers will, if the situation arises, be forced to make decisions, and their programming is how that gets resolved. Testing hobbyist AI cars on the street is not something that has been an issue before, but it soon will be.
