by Joe Van Steen, January 2017
A systems thinking based, augmented intelligence system for use in planning and managing the evolution of futures-based system architectures.
A modeling system for the management of systems architecture.
An architecture for an intelligent management assistant.
I am not insane, and yes, I am serious. I say this because I know what some of this sounds like. You can't possibly ... But yes, I think that what I am talking about can be done, if carried to the extreme as a new form of intelligence engine or knowledge engine, a new tool, if people wanted to do it.
There are philosophical debates to be held about AI, and hypothetical battles between AI systems and humans. That is not this discussion. This is more about shared human intelligence empowerment for humans, using "artificial brains" in the form of personal technology devices as problem solving tools. It's about trying to find the eye of the storm in the information overload arena.
I believe that the current state of affairs across the globe is reaching an "overload" crisis point on more and more issues, and with increasing frequency. And, so far, we seem to be at a loss in terms of management of our lives, as individuals and as groups. Part of the problem is our own success and sophistication in architecting real world systems out of concepts, and then using them to optimize our own lives. The systems we create become more complex to manage as they multiply and we cascade success on success. The more systems there are, the more relationships exist, and the variables associated with relational management grow geometrically, to a point of overload. Part of the problem is the "age" in which we live. The age of automation. Computers are part of the problem. Lots of things are part of the problem. And computers enable lots of things; and that potential for compound problems is also growing geometrically, to the point of overload. Computers are wonderful. They enable everything. That's good, and really, really bad if this "new" technology isn't "managed" as it continues to accelerate all of the other systems we use to jointly live our lives. Compound, cascaded overloads occur.
I submit that, in multiple dimensions (economics, governance, social relations, etc.), a significant part of the core problem is an inability to comprehend and manage architectural scale issues in the overloaded problem space. One aspect of this is the inability to handle change as a routine element of ongoing functional existence. If you can't handle something because it is too big to begin with (you're already on overload), how can you possibly handle a dynamic version of it? "Right size" changes with time, statically sized solutions no longer work, etc.
As humans, we differ from other animals in our ability to have abstract thought1 and reasoning. With that ability we set out to conquer the world. Which we did. And all of this stuff around us is what we've created.
Well, all of this stuff around us, including us, can be defined as an assembly of overlapping systems of various forms, types and shapes; which we are modifying and regenerating at an increasing pace. Most of it is reasonably managed; a lot of it is not. And unregulated systems, especially new systems, are chaotic. Today's managers are reasonably good at managing to expectations based on orderly dynamics. But variability is problematic, and wild, unexpected turbulence is a major cause for concern. It destroys otherwise functional systems. Dealing with narrow-view systems, verticals, is especially problematic because they fail to account for significant interrelationships that happen between systems as parts of larger systems. The interactions become unregulated, or they become regulated in favor of one system over another on the basis of power, rather than on the basis of the natural order of the environment. Clusters of such unregulated systems, where each system is able to independently decide and implement its actions without regard for others in its context because it or the significant interaction isn't being managed, can jeopardize the viability of the larger ecosystem2. These, effectively, are cancers: systems destroying their host systems from within.
Chaotic systems need regulation. Most of the lack of regulation is because of the overwhelming complexity of the systems we have created, and will continue to create, and our inability to configure appropriate architectures to engineer the regulations. Some is because the problematic regulatory architecture hasn't been developed, or hasn't been "right-sized" and it causes "more pain than gain" so delight suffers. Some, to be realistic, can be caused by people who find it personally advantageous to work counter to the common good3.
Another aspect of comprehension overload is a lack of proximity to nature and the natural order. Nature ebbs and flows. Nature exhibits polarities. Diversity fosters vitality through continual reintegration. Nature does not dictate from a central authority. In the beginning, humans were hunters and gatherers, and evolved to become farmers, ranchers and manufacturers. These occupations, in supporting an evolving society through commerce and exchange, remove us from nature. And cities tend to do so even more, as centers of commerce and exchange. We've become very good at commerce and exchange, but very far removed from nature, to the point of potentially threatening our viability as a species.
I hope the solution being offered can help with some of this. For the rest, the hope is that the architecture of the solution would prevent the AI facility's core utility from being unknowingly4 biased counter to the common good. Meaning, my premise is that a system in which people can trust is the foundation of acquiescence to the social group. A system with fair rules and a fair chance for fair treatment. Ideally, the open architecture can be trusted to be unbiased, and to provide a "fair hearing" on the significance of stated user belief systems in any conclusions presented.
Fundamentally, what EATS has become in the years that I've been working on it is a scheme for harnessing a workable understanding of now, and a way of rationalizing about tomorrow.
Conceptualized systems of ideas are how people make sense of things, and how we deal with "other," everything outside of ourselves. All of these "systems" have architecture.
- The architecture defines the system's functionality, and that tells us whether the system is of value and serves a purpose that we may be interested in.
- The architecture defines the system's durability, and that is the basis for making decisions about engineering qualities that are required to make the system useful and achieve its functionality. Some systems only have to survive for a very short time in order to achieve their function, others must last indefinitely (as close to infinitely as is possible). Some concepts for systems are discarded in terms of implementation because they cannot, will not, or can no longer exceed minimal architectural engineering support requirements.
- The system's architecture (how form and function have been matched to satisfy requirements) exhibits some aspect of delight for the parties involved in its creation and/or the parties impacted by its existence. This is where "systems" succeed or fail. The functionality and durability come together and join the two polarities of creation and use, and the polarities of outer (contextual) requirements and internal requirements, and generate some amount of "delight" back to the creators and users. The most "successful" architectures are those which rate high in user delight in all four of those quadrants over the lifespan requirements of the associated system.
- Good utility with the contextual environment
- Good utility for the internal subsystems
- Delightful for the creators
- Delightful for the utilizers, which may include the creators (which is a "power function" when it happens)
So "architecture" appears to be a good way to measure and guide a system's managed evolution over time. What utility is good now? What utility will be good over the lifespan of the system? Does the utility requirement vary over time? How? If there are critically urgent functional requirements, what utility is required to support those system needs? Given answers to these types of questions, an architect can attempt to engineer a solution, a pathway to system creation or survival, within constraints that ideally achieve "delight" for impacted parties.
Systems Thinking is an attitude of dealing with the real world and its various issues, ailments and opportunities in the framing of "systems." People have been making use of "systems" (conceptualizations) since the dawn of man, since the first communicated "ideas" about cause and effect, and relationships among things. And they've been using architecture since they learned to carry fire further than they could carry a burning stick. It is part of what separates us from the other animals on the planet. Systems thinking is newer as a formally understood body of knowledge. From Wikipedia:
Systems thinking has roots in a diverse range of sources, from Jan Smuts' holism in the 1920s, to the general systems theory that was advanced by Ludwig von Bertalanffy in the 1940s, and cybernetics advanced by Ross Ashby in the 1950s. The field was further developed by Jay Forrester and members of the Society for Organizational Learning at MIT, which culminated in the popular book The Fifth Discipline by Peter Senge, which defined systems thinking as the capstone for true organizational learning. Derek Cabrera's self-published book Systems Thinking Made Simple claimed that systems thinking itself is the emergent property of complex adaptive system behavior that results from four simple rules of thought.
Systems thinking applies the concept of systems to the process of conceptualizing systems; resulting in systems of systems, of systems. And once you do that, the result is exponential expansion. It's actually been happening in our heads from some early version of man. It defines the history of civilization, and the history of technology. Maybe it's part of the difference between Sapiens and our cousins of genus Homo. That might explain a lot in terms of anthropology, and open up new questions. But the formalism of systems thinking as a discipline is a new thing, partially coming out of cybernetics, as described above.
Computers and computer algorithms are part of what sent us around the bend, past the inflection point, on some of this. I think computers and information processing can also play a significant role in resolving the problems the new technology has inflamed, as well as helping in resolving problems related to the regulation of other systems in need of management. The proposal involves two parts:
- An automated environment
- Procedures and activities that need to be accomplished by human beings
The initial architecture of the automated environment as defined here is assumed to be a starting point for an ongoing initiative that can potentially go on forever, similar to the Wikipedia effort or a number of other collaborative projects. The procedures and activities define a variety of viewpoints and use cases describing both the use and generational maintenance of the system. In a Wikipedia framework there are procedures for contributors, editors and readers. For EATS these functions need to be accomplished for conceptual models in addition to encyclopedic content, potentially including any Internet-shared, or otherwise definitive, resource. This can offer some challenges, but only if the proposal is successful.
The solution involves augmented intelligence, AI. It involves getting people to trust a computer system to give them an "honest" answer to design questions about the state of the world, so that they can come to grips with the critical human issues around beliefs and how to plan together for the common good. And it involves making a computer system competent to the point of being able to offer advice, with varying degrees of specified certainty or plausibility, about what is or what might be on subjects of consequence.
Ultimately, managing the world at both macro and micro levels is, and always will be, a human problem. Technology is just technology, no matter how smart it may appear. Computers are just chemical elements wired to run instruction sets, with no true concept of why they are doing what they are doing. Like rocks rolling down a hill. Artificial intelligence is not artificial life. It's a tool. An enabler. The concept of society as a "system" is a human endeavor, and must be managed and operated as a human enterprise. Without humans, the machines will run down, and stand idle. Like stopped clocks. Citizens are the engine of the system. Citizen participation and involvement is "the system." The proposed AI IT is an enabler. The idea that this may cause continued drift toward "ethical questions5" is a different, but valid, concern which should cause us to determine how to manage that, if necessary, AFTER we find a way to get through some manner of triage.
Augmented intelligence, which is what this proposal involves, is the use of computer "artificial" intelligence as an aid to human intelligence for computational problem solving. A very, very sophisticated calculator, but a calculator. It can rank, rate, and provide statistics about things; but it ultimately can't make a decision because it doesn't "understand" how the answer matters. All it can do is provide a set of rationalizations.
The rationalizations still need to be evaluated and any decisions made by people. In a sense, the proposed system is a calculator and tool kit for thinking about abstract problems. And, what the calculator does, is facilitate collaboration on shared problem solving, and allow synchronized thinking around shared concepts.
The process anticipated for the proposal includes a gradual evolution of the product concept from contributions of an open group of individuals subscribing to its virtues, some through invitation, others by word of mouth. Initial activities are on a voluntary basis. If the proposal proves successful or gathers sufficient interest, a more permanent social support system would need to be structured for a sustainable endeavor. (For example, Wikipedia has a foundation.)
A fully successful endeavor would include, on Tom Graves's adaptation of the classic Forming/Purpose, Storming/People, Norming/Preparation, Performing/Process, Adjourning/Performance fractal cycle model:
- An effort to assemble a group of people competent in the applied disciplines involved in the proposal to review the proposal to determine if there are glaring errors, or if the work is redundant. Completion of this is subject to completion of the effort to create a technical specification capable of review as a serious work of engineering. This is where we are today (1/2017).
- An effort by an initial group of individuals to assemble a working model and documentation for a prototype, demonstration system. This is fundamentally what I have been working on by myself for the past years, based on the work of the previous 35 years. This is a functional version of EATS supporting at least one useful use case application. Currently (1/2017) this is viewed to be a Design Discourse Support System along the lines specified by Thorbjoern Mann. Such a system could be used as a collaborative discussion support system for intelligent issue resolution at a variety of scales and in a variety of applications. Such a system can be built entirely without AI, simply as an architected discussion management system. These systems do require the active engagement of human moderators as administrators. Administrators are required for guidance in using system facilities, as methodology facilitators, and for enforcement of community standards, if necessary.
- Once a base system has been constructed in an initial form and demonstrated to be successful, an effort should be launched to compile expanded content and algorithms to satisfy identified needs on a priority basis. This is defined as implementation of the multi-generation plan. And, given a knowledgebase of sufficient scope to have utility, the AI facility can then become available as a free utility to the general public6.
- Once a base system proves successful, expansion in the multi-generational plan moves forward along multiple axes simultaneously. Development resources are consumed in the creation process as the models need to be built in more subject areas, and as the push is made into AI, forecasting and navigation features. There is also a need for administration and guidance in the use of expanded features, hypothetically with an expanded audience. A significant ongoing requirement for a successful product will be how to maintain trust in a facility which is essentially an NGO as it scales up.
- It may appear overly egotistical or grandiose to talk about some of these issues and projections in such sweeping scale, but core to the ideas central to EATS are universal fundamentals, as the basis for shared understanding. That concept, and a global Internet, tend to take things to grand scales automatically these days.
Long Term Success
Long term success could involve:
- The need for a group of global citizens who will develop and perpetuate the maintenance of an AI facility designed to assist modelers, and all other interested citizens, with the management of issues related to the architectural scale of the current real world. This is a task that could go on forever, feeding innovation: care and feeding of open source software, and metamodel refinement and expansion. This group of people is subject to the same trust requirement as the computer code. Effectively, these people are the new priests. This work needs to be accomplished as open-book, disclosed, open-for-review activity. Key to the long-term success is following the model currently employed in the scientific, educational and professional environments in terms of peer-reviewed and refereed work as trusted citations. But some form of trusted, refereed public opinion as to concerns also needs to be included for desired levels of trust to be achieved.
- Integration into the fabric of the global communications environment. A global facility, with redundant core knowledgebases and hack-proof audit logs, could become a cornerstone of trust for building a collaborative future at a variety of levels. The system design should prove impervious to scale, with the exception of propagation delays for physically remote connections. At the same time it should be capable of infinite personalization in terms of viewpoint adjustment, such that every user citizen can maintain a unique and personal view of the world, which might be managed from their personal automated devices.
Conceptual System Description
The core problem we are dealing with, as it relates to this proposal, is a hyper-example of Ashby's Law, the Law of Requisite Variety. Edited from Wikipedia:
If a system is to be stable, the number of states of its control mechanism must be greater than or equal to the number of states in the system being controlled. Ashby states the Law as "variety can destroy variety". He sees this as aiding the study of problems in biology and a "wealth of possible applications". The Requisite Variety condition can be seen as a simple statement of a necessary dynamic equilibrium condition in information theory terms, c.f. Newton's third law, Le Chatelier's principle.
[ A system has good Control if and only if the dependent variables remain the same even when the independent variables or the State Function have changed. In a real system this implies that the State Function is a composition of two functions, such that the second is the inverse of ( the possible changes of ) the first:
y = F(G(x)) where
F = controller system's function of state
G = controlled system's function of state
x = inputs, OR, independent variables
y = outputs, OR, dependent variables.]
In 1970, Conant, working with Ashby, produced the good regulator theorem, which requires autonomous systems to acquire an internal model of their environment in order to persist and achieve stability (e.g. Nyquist stability criterion) or dynamic equilibrium.
Stafford Beer used this to allocate the management resources necessary to maintain process viability.
The problem, according to Ashby, is too many states (variables), which is a problem in the attempt to regulate many systems. And we have it at exponentially growing levels. On a growing number of issues.
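To make the quoted law concrete, here is a toy enumeration in Python. The outcome table (outcome = disturbance + response, modulo six) is invented purely for illustration: a regulator with as much variety as the disturbances can force a single stable outcome, while a poorer one lets residual variety leak through.

```python
# Toy regulator game, invented for illustration: a disturbance d arrives,
# the regulator answers with a response r, and the essential variable
# settles at (d + r) mod N_OUTCOMES. "Stable" means outcome 0.
N_OUTCOMES = 6

def achievable_outcomes(n_disturbances, n_responses):
    """Outcomes the regulator is forced to accept, even when it sees
    each disturbance and picks its best available response."""
    return {
        min((d + r) % N_OUTCOMES for r in range(n_responses))
        for d in range(n_disturbances)
    }
```

With six responses against six disturbances, `achievable_outcomes(6, 6)` collapses to `{0}`; with only two responses, five distinct outcomes survive. The regulator's variety bounds the variety it can absorb.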
The proposed solution is based first on modeling and model management, and then on augmented intelligence. Using computers and software we can build models of almost any describable system, fact, fiction or fantasy. And, we know how to build models of models, and models of models of models. This is a form of inverse power rule that can be used to manage Ashby's Law. Metamodels are good regulatory frameworks. Cascaded metamodels can be used to synthesize key decision variables for complex sets of relationships in conflicting systems.
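As a sketch of how a metamodel regulates the models beneath it, the fragment below (all names invented, not part of any EATS specification) lets an M2 layer fix which containment relations are legal, collapsing the variety of possible M1 models to the conforming ones:

```python
METAMODEL = {                        # M2: which containment relations are legal
    "System":    {"contains": {"System", "Component"}},
    "Component": {"contains": set()},
}

def conforms(model):
    """Check an M1 model, given as name -> (type, child names),
    against the M2 layer above it."""
    for _name, (etype, children) in model.items():
        allowed = METAMODEL[etype]["contains"]
        if any(model[child][0] not in allowed for child in children):
            return False
    return True

payroll = {                          # M1: one concrete, conforming model
    "payroll": ("System", ["ledger", "taxes"]),
    "ledger":  ("Component", []),
    "taxes":   ("Component", []),
}
```

Here `conforms(payroll)` holds, while a model that puts a System inside a Component is rejected; the metamodel has absorbed that variety before any human has to.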
The more significant AI portions of the system are based on applying knowledge engineering and simulation capabilities to the fundamental information structuring models. This includes artificial intelligence reasoning capabilities, in conjunction with human reasoning, where the computer acts in a supporting role as an advanced mathematical calculator, to figure things out and provide the humans with better decision making capabilities.
Fundamentally, you can't control a human system with a computer system because of Ashby's Law. But you can control a computer system with a human system. And, there is nothing to stop the human system from building a model of itself that can be manipulated in a computer to evaluate concepts, and then self-modifying toward methods of operating which would seem to promise a more rewarding existence. We do this all the time on focused issues. It involves scenario analysis and planning. Strategies and tactics. The question is engineering. Is it technically feasible, for what cost, with what risks?
Our fundamental conceptual framework is to model a federation7 of human systems and how they interact with one another, where any individual can belong to multiple federations in an environment where all federations combine to form a unity.
This creates a system (whole and parts) of focus framing that can be used to model individuals, or all of society, or anything in between. And it provides a three tiered segmentation for development of points of view:
- The point of view of an individual in the federation.
- The point of view of some federation of individuals with a particular viewpoint or set of concerns
- The point of view and concerns of the whole of the unity, at whatever level
In the modeling system "parts" and "whole" can slide up and down a scale. Federations fundamentally define interrelationships of parts within the context of whole.
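The federation framing can be sketched with a composite structure in which a federation is itself an element, so the three tiers of viewpoint fall out of one class. The names are hypothetical:

```python
class Element:
    """An individual or a federation; a federation is itself an Element,
    so "part" and "whole" slide along one scale."""

    def __init__(self, name):
        self.name = name
        self.members = []            # empty for an individual
        self.federations = []        # an individual may belong to several

    def join(self, federation):
        federation.members.append(self)
        self.federations.append(federation)

    def unity(self):
        """Every enclosing whole, at every level, seen from this part."""
        wholes = set()
        for fed in self.federations:
            wholes.add(fed.name)
            wholes |= fed.unity()
        return wholes

alice, town, guild, world = (Element(n) for n in
                             ("alice", "town", "guild", "world"))
alice.join(town)      # one individual, two federations
alice.join(guild)
town.join(world)      # all federations combine into a unity
guild.join(world)
```

From alice's vantage point, `unity()` reports town, guild and world: the individual tier, the federation tier, and the whole, each available as a framing for a point of view.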
Models are how we precisely define concepts. Concepts are things which we hold in our heads. Humans "understand each other" based on aligning mental models with other humans. We perceive the universe as a set of physical stimulations. We understand it by creating mental models. We understand each other by exchanging information about those mental models, and coming to some reconciliation. We don't always agree, but we can come to a point of understanding. We understand each other because we share a common model of some agreed scope concerning some agreed concept. Why the sky is blue. Why planes can fly.
Simple models are easy to appreciate. Complex models not so much, you need to be a specialist. Wicked problems have wicked models, and some problems defy understanding. Models also have issues concerning believability. But models are also how we bring order to chaos.
The proposed solution is a system based on augmented intelligence. In most of life, there are no correct answers; and the answer is often context dependent.
The proposed AI (augmented intelligence) facility is designed to assist modelers with the management of issues related to the architectural scale of the current real world. What that means is a tool kit that can be used by modelers to determine, from their own perspective, how decisions which they need to make in the real world are likely to affect them. By reducing the number of variables required to be understood by each citizen to a set of variables that they feel comfortable using to make their decision regarding how to interact with the world around them, the citizen should feel less overwhelmed. Ideally, those who use the system will have better outcomes. Outcomes with which they are more pleased.
For black and white problems, we can build Artificial Intelligence solutions, the other AI. A computer can figure out how to win a game with consistent rules and fair odds. If there is a definite "correct" as opposed to a definite "wrong" answer, a computer can figure out how to achieve a "correct" answer faster than a human being. The main issue as we delve further into that area, I think, is a matter of depth and sophistication of AI analytics strategies, based on the sophistication of the subject areas being pursued8. But these are often brute force solutions, which are then adjusted to use heuristic algorithms and patterns. We have everything from robots used in manufacturing to driverless cars. Big data is largely brute force.
Augmented intelligence, at least as I use the term, applies to a class of problems where there are largely no "correct" answers. It's the solution for a class of problems where humans are overwhelmed by the size and scope of the problem, but computers can only calculate likelihood based on a limited parameter set. These are problems where we have "modelers" in senior capacities in business, government and various private and public institutions who evaluate various systems affecting the lives of citizens, and who formulate public policy based on their findings. What is happening today with the modelers and their computer models is augmented intelligence.
Policy makers use their best intelligence to create computer models of how they expect the systems in their domains of expertise to work. They feed the models with historical information and current state data. And the various models, under differing assumptions, come up with different answers. And the policy makers still have to decide what to do, but they, hypothetically, are able to make better decisions based on the input from their computer models. Their original intelligence has been augmented, but one or more human policy makers still have to make decisions.
Understand here, that this is not magic. Detailed algorithms must be implemented and a knowledgebase loaded, but for a lot of subjects this is a logistical issue. Simple cases involve things like record keeping, retirement planners, personal information managers, etc. It works up from there.
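The retirement planner just mentioned makes a convenient miniature of this loop. In the sketch below (the model and growth rates are invented for illustration, not financial advice), the machine runs one toy model under three assumption sets and reports the spread; the human still decides:

```python
def balance_at_retirement(annual_saving, years, growth_rate):
    """Future value of a fixed yearly contribution (toy model only)."""
    balance = 0.0
    for _ in range(years):
        balance = (balance + annual_saving) * (1 + growth_rate)
    return balance

# Three assumption sets; the spread across them, not any single number,
# is what the human decision maker actually needs to see.
scenarios = {"pessimistic": 0.02, "baseline": 0.05, "optimistic": 0.08}
projections = {name: balance_at_retirement(10_000, 30, rate)
               for name, rate in scenarios.items()}
```

The calculator augments the saver's intelligence by turning one vague worry into three comparable numbers; it cannot say which future to plan for.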
- Communications is via IP; every unit can be an island, or connected; basic topology is a distributed graph, a grid
- Operating environment follows the EATS technology stack which insulates the "application" from the mechanics of data processing. The stack operates in an Eclipse tailored JVM.
- Content management for basic features requires SQL for content indexing.
- The application space is defined as a grid of techniques, concepts and domains. The EATS framework provides the infrastructure to navigate the facets of the application space.
- The application space is managed as a globally shared resource with capabilities for private extensions.
- Concepts are identified as Elements, and given globally unique identifiers (GUIDs).
- The EATS infrastructure provides an architectural scheme for encoding concepts as models describing systems at various levels of composition and decomposition and in various forms.
- Central to the concept of the proposed system is the principle of timeless dependability, which reinforces the building of trust. EATS models are a scheme to evaluate concepts involved in design and decision making. Concepts are not project specific. Old concepts which are still applicable, and have not been replaced or proven false, may be considered of greater reliability and dependability than untried new concepts when measuring dependability. Conversely, new concepts tend to be better sources of innovation.
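A minimal sketch of the Element identification and dependability bullets above, assuming a simple in-memory registry (the scheme is invented for illustration, not taken from any EATS specification):

```python
import uuid

registry = {}                        # stand-in for the globally shared store

def define_element(name, replaces=None):
    """Register a concept under a globally unique identifier. If it
    supersedes an older concept, mark that one retired. (Hypothetical
    scheme, invented to illustrate the bullets above.)"""
    guid = str(uuid.uuid4())
    registry[guid] = {"name": name, "replaces": replaces, "retired": False}
    if replaces is not None:
        registry[replaces]["retired"] = True
    return guid

# Two generations of the same concept: the old one stays on record,
# marked retired, so dependability can be judged over time.
old = define_element("Requisite Variety")
new = define_element("Requisite Variety (revised)", replaces=old)
```

Because identifiers are generated without any central registry, models built on separate islands can later be merged without collisions, which is the point of the GUID bullet.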
The EATS solution to the defined problem of variable overload is to provide a modeling environment where a model of any system can be reviewed, adjusted, or created and shared using standardized or customized beliefs. Models built within the environment can then be evaluated for plausibility and significance regarding any issues raised. And these evaluations can be measured in terms of corroboration with known versions of reality. Objectively this should provide an enhanced tool for planning and decision making, and a basis for cooperative problem solving.
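As one hypothetical way such a corroboration measure could work (the scoring rule below is invented for illustration; nothing in the proposal prescribes it), a claim's plausibility can rise with corroborating sources and fall with contradicting ones:

```python
def plausibility(corroborating, contradicting):
    """Laplace-smoothed share of supporting evidence, in (0, 1).
    An unexamined claim scores 0.5; evidence pulls it toward 1 or 0.
    (Illustrative rule only, not an EATS specification.)"""
    return (corroborating + 1) / (corroborating + contradicting + 2)
```

Whatever the actual rule, the requirement is that it be open and auditable, so a user can see why a model element scored as it did.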
Part of the solution is to deal with the world through an architectural viewpoint which provides an understandable, unbiased, overall structure to the solution.
- The real world is described as a set of systems from a set of viewpoints
- Part of being real is that the solution itself exists within the real world as we know it. By default the AI facility will enforce solutions that fit within real world constraints.
- Part of the real world is that "stuff happens" and mistakes exist, so while the AI facility will not support the creation of "unreal" conditions, it must tolerate them whenever possible and alert to the use of questionable information. Often, what appears "unreal" to the AI facility is better characterized as "unknown." It is then up to the human user of the AI facility to integrate this into their extrapolations of significance in their use of the system. And, potentially, identify the issue as a cause and need for further development of the AI facility in the related aspect of awareness and knowledge.
- Systems have fundamental concerns on the part of stakeholders that are achieved as a balance of various factors focused on utility, durability and overall satisfaction among other points of view.
Part of the solution is to employ a multidimensional, model-based engineering approach to the management of the architecture models.
- Part of the solution is to employ total quality engineering for effectiveness measurement and feedback control systems relative to managed architectural solutions.
- Part of the solution is to employ system dynamics modeling as part of alternatives analysis
Part of the solution is to utilize Stafford Beer's Viable System Model (VSM) in the regulator function and in evaluation of decisions involving forward thinking. The nature of the VSM control model acts as a reducer of variability between operational and managerial components. It also supports recursion as contextual levels are ascended. This allows current operational situations to be decided on the basis of optimal alignment with higher level strategic goals, vision and principles. The problem is maintaining an ability to keep a multi-tiered conceptualization of a complex VSM in your head, on paper, or in a computer algorithm, and being able to maintain it as scenarios change. The management of the model becomes more complex than the management of the original problem. To resolve that problem the AI facility is to use an OMG MOF facility to manage the metadata of the cascaded VSM such that a four layer metamodel can be used to handle an infinitely cascaded hierarchy of systems.
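The recursion described here can be sketched with one fixed schema that nests inside itself, which is the same trick that lets a four-layer metamodel stack describe an arbitrarily deep cascade. The role names follow Beer's Systems 5 through 2; the firm/division/plant example is invented:

```python
def viable_system(name, operations=()):
    """One VSM level: Systems 5..2 as fixed management roles, System 1
    as a list of nested viable systems."""
    return {
        "name": name,
        "management": ["policy", "intelligence", "control", "coordination"],
        "operations": list(operations),      # System 1: each is a VSM itself
    }

def recursion_depth(vsm):
    """How many VSM levels the cascade currently holds."""
    if not vsm["operations"]:
        return 1
    return 1 + max(recursion_depth(op) for op in vsm["operations"])

# An invented three-level cascade: firm -> division -> plant.
plant = viable_system("plant")
division = viable_system("division", [plant])
firm = viable_system("firm", [division, viable_system("logistics")])
```

Because every level has the same shape, the machinery that maintains one level maintains them all; the human never has to hold the whole cascade in their head at once.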
Part of the solution is to employ international standards wherever applicable, to ease integration with other systems by users globally.
Part of the solution is to ensure that 100% of the code and documentation for the system is to be open source for complete transparency.
Part of the solution is to ensure that 100% of the shared content can be measured as to plausibility.
Part of the solution is to provide a finished solution that can be used as an AI assistant in its vanilla form, and/or extended and tailored in terms of both capabilities and personal beliefs as they affect interpretations of, attitudes toward, or dealings with the real world.
- 1. Live Science
- 2. Ecosystem: the larger environmental, viable context, which includes self.
- 3. These tend to be, I believe, a very small group; but necessary to guard against. They are not as easy to isolate and identify as some people would like, but high quality forensic methods, and assurance of getting caught, are also good prophylactics.
- 4. The point being made here is that a core issue with the use of the AI facility is its trustworthiness regarding system results. There is a concern people have about AI: can it get to the point where it works for its own benefit rather than for human benefit? To alleviate that concern, the AI facility needs to be able to demonstrate an awareness of benefit, and then demonstrate how that benefit is understood to occur, and how it might be appreciated, i.e., how delight occurs. System delight is targeted to all concerned parties and accounted for in a transparent manner. Delight will be measured, and in some cases, based on specification, delight will be biased toward specific targets, because that is the purpose and intent of the system. But, for trust that a "fair share" apportionment is occurring, these need to be "known" biases. For example, critical variables need to be managed within control parameters. This is a "bias" toward being functional. This leads to a requirement for "open" specifications and operating principles.
- 5. The concern is that more responsibility will be authorized to be assigned to automated algorithms than is appropriate for the situation. Trusting AI to drive a car is one issue. Trusting AI to intentionally take a human life is quite different.
- 6. Conceptually you can think of this as a form of Wikipedia launch where Wikipedia is released in segments by subject matter.
- 7. A federation concept provides the scheme for mapping to treatments of elements as systems. EATS Elements can be systems, and they can be components of larger systems. This is known as the Composite Pattern. Effectively, it's a Bill of Materials concept. It is how we view the physical world.
- 8. For example, medical diagnosis.