Conclusion

Computing-system complexity is driven primarily by the complexity of what such systems are designed to manage, and what they are designed to manage, because it empowers us most, is knowledge: what is increasingly known of reality.

More specifically, computing-system complexity is driven, on the one hand, by our understanding of reality (i.e. knowledge) and, on the other hand, by our lack of understanding of that knowledge itself.

There are other factors, such as the distribution of systems, applications, content, and users, as well as scalability, but none weighs more heavily than the problem domain and our grasp of the issues at stake.

Through integrated knowledge paradigms, abstraction, and generalization, applications can introduce new levels of sophistication, depth, and complexity management while only marginally increasing the required computing-system complexity.
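
As a purely illustrative sketch (the interface and class names below are hypothetical assumptions, not part of any framework described in this text), this kind of abstraction and generalization can be pictured as a single generic contract that every knowledge resource satisfies, so that supporting a new kind of resource costs one small class rather than new system-wide machinery:

```typescript
// Hypothetical sketch: one generic abstraction over all knowledge resources.
interface KnowledgeResource {
  id: string;
  describe(): string;
}

// A generic repository works unchanged for every resource type, present and future.
class Repository<T extends KnowledgeResource> {
  private items = new Map<string, T>();

  add(item: T): void {
    this.items.set(item.id, item);
  }

  find(id: string): T | undefined {
    return this.items.get(id);
  }
}

// Adding a new kind of resource is marginal: one small class, no new machinery.
class Document implements KnowledgeResource {
  constructor(public id: string, private title: string) {}
  describe(): string {
    return `Document: ${this.title}`;
  }
}

class Dataset implements KnowledgeResource {
  constructor(public id: string, private records: number) {}
  describe(): string {
    return `Dataset with ${this.records} records`;
  }
}

const repo = new Repository<KnowledgeResource>();
repo.add(new Document("d1", "Baseline"));
repo.add(new Dataset("s1", 42));
console.log(repo.find("d1")?.describe()); // "Document: Baseline"
```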

In fact, any hope of truly effectively managing the complexity of reality and knowledge (the clear justification for developing ever more complex computing systems), and of effectively understanding what we need to teach computing systems to do, namely to manage knowledge resources better, requires a scientific understanding of the natural knowledge phenomenon: of its natural architecture and of its natural, underlying structuring principles.

With scientifically grounded knowledge-architecture foundations, with effective representations (e.g. notations, models, data formats, structures, and documents) and standards, and with structured, maintainable, flexible, modular, and extensible applications and interfaces to learn, work, control, track, research, model, play, collaborate, and share, resource entitlement, modeling, management, and sharing (REMMS) frameworks are key to mastering the complexity of knowledge, of reality, and of computing systems.
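
As a closing illustration, here is a minimal, hypothetical sketch of what a REMMS-style contract might look like in code; the interface, class, and method names are illustrative assumptions rather than a specification given in this text. It composes the four REMMS concerns, entitlement, modeling, management, and sharing, behind one resource:

```typescript
// Hypothetical sketch of the four REMMS concerns as separate, composable contracts.
// All names are illustrative assumptions, not a published specification.

interface Entitlement {
  mayRead(userId: string): boolean;  // who may consult the resource
  mayWrite(userId: string): boolean; // who may modify it
}

interface Model {
  toRepresentation(): string; // a standard, exchangeable representation
}

interface Management {
  version(): number; // lifecycle tracking
  update(content: string, userId: string): void;
}

interface Sharing {
  shareWith(userId: string): void; // controlled distribution
}

// A minimal concrete resource combining all four concerns.
class Note implements Entitlement, Model, Management, Sharing {
  private readers = new Set<string>();
  private ver = 1;

  constructor(private owner: string, private content: string) {
    this.readers.add(owner);
  }

  mayRead(userId: string): boolean {
    return this.readers.has(userId);
  }

  mayWrite(userId: string): boolean {
    return userId === this.owner;
  }

  toRepresentation(): string {
    return JSON.stringify({ version: this.ver, content: this.content });
  }

  version(): number {
    return this.ver;
  }

  update(content: string, userId: string): void {
    if (!this.mayWrite(userId)) throw new Error("not entitled to write");
    this.content = content;
    this.ver += 1;
  }

  shareWith(userId: string): void {
    this.readers.add(userId);
  }
}

const note = new Note("alice", "Knowledge is structured.");
note.shareWith("bob");
console.log(note.mayRead("bob"));     // true
console.log(note.toRepresentation()); // {"version":1,"content":"Knowledge is structured."}
```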