Topologies of Generative AI
Thoughts on the similarities between Team Topologies and building GDPR-compliant AI tools.
Recently, I had the privilege of attending a session with Michael Spiteri, a CTO and strategic advisor who has immersed himself in the tools emerging from the generative AI boom. Although the space is evolving so rapidly that keeping up is a challenge, Michael emphasizes that the fundamental principles of AI remain unchanged. At their core, these tools are still models, albeit highly sophisticated and extensively trained ones, built on statistics, probability, and a tremendous amount of computational power.
During the session, one of Michael's key insights revolved around his approach to building and training a GDPR-compliant AI. He presented a slide that divided his solution into three distinct layers of models: Assistants, Researchers, and Librarians. Assistants are the large language models (LLMs) that interact with users and coordinate with Researchers to answer questions and complete tasks without "hallucinating." Researchers, on the other hand, are subject matter experts tasked by Assistants to provide answers and complete tasks within their specific domains. Finally, Librarians are responsible for providing Researchers with fresh, curated data relevant to the topics they are working on.
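To make the three layers concrete, here is a minimal sketch of how that delegation might look in code. This is purely illustrative: the class names, methods, and data are my assumptions for the sake of the example, not Michael's actual implementation.

```python
# Hypothetical sketch of the Assistant -> Researcher -> Librarian layering.
# All names and data here are illustrative assumptions.

class Librarian:
    """Supplies fresh, curated data on a given topic."""
    def __init__(self, corpus: dict[str, list[str]]):
        self.corpus = corpus

    def fetch(self, topic: str) -> list[str]:
        # In a real system this might query a vetted, GDPR-compliant store.
        return self.corpus.get(topic, [])


class Researcher:
    """Domain expert: answers questions using only curated sources."""
    def __init__(self, domain: str, librarian: Librarian):
        self.domain = domain
        self.librarian = librarian

    def answer(self, question: str) -> str:
        sources = self.librarian.fetch(self.domain)
        if not sources:
            return f"No curated data available for '{self.domain}'."
        # A real Researcher would run a domain-tuned model over the sources;
        # here we just report what the answer would be grounded in.
        return f"[{self.domain}] {question} (grounded in {len(sources)} sources)"


class Assistant:
    """User-facing layer: routes each question to the right Researcher."""
    def __init__(self, researchers: dict[str, Researcher]):
        self.researchers = researchers

    def ask(self, domain: str, question: str) -> str:
        researcher = self.researchers.get(domain)
        if researcher is None:
            # Declining beats hallucinating an answer.
            return "I don't have an expert for that topic."
        return researcher.answer(question)


librarian = Librarian({"privacy": ["GDPR Art. 5", "GDPR Art. 17"]})
assistant = Assistant({"privacy": Researcher("privacy", librarian)})
print(assistant.ask("privacy", "What is the right to erasure?"))
```

The key property of the sketch is that the Assistant never touches raw data directly: it can only answer through a Researcher, and a Researcher can only draw on what its Librarian has curated, which is what keeps hallucination (and non-compliant data) out of the user-facing layer.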
As Michael explained this structure, I couldn't help but notice a striking similarity between these three categories and the team types described in the book "Team Topologies" by Matthew Skelton and Manuel Pais. Published in 2019, "Team Topologies" has quickly become a must-read for business and technology leaders alike. The book describes four fundamental team types, three of which are relevant here: Stream-aligned, Enabling, and Complicated Subsystem (the fourth, Platform, has no direct analogue in Michael's model).
Stream-aligned Teams are the "delivery" teams optimized for a fast, focused workflow. They are responsible for delivering value to customers and are aligned with a specific product or service stream. These teams are cross-functional and have all the necessary skills to deliver end-to-end value. In the context of Michael's AI solution, the Assistants can be seen as analogous to Stream-aligned Teams. They are the primary interface with users and are tasked with delivering value by answering questions and completing tasks.
Enabling Teams, as described in "Team Topologies," are experts who provide specialized knowledge and capabilities to Stream-aligned Teams. They help Stream-aligned Teams acquire new skills, adopt new technologies, and solve complex problems. In Michael's AI solution, the Researchers serve a similar purpose. They are subject matter experts who provide the knowledge and expertise needed by the Assistants to answer questions and complete tasks accurately.
Lastly, Complicated Subsystem Teams deal with the complex, detailed parts of the system. Their role is to encapsulate that complexity and make it easy for other teams to consume. These teams are responsible for building and maintaining the critical, complex parts of the system that require deep expertise. In Michael's AI solution, the Librarians can be seen as fulfilling a similar role. They are responsible for curating and maintaining the data that the Researchers rely on to provide accurate and up-to-date information.
Even allowing for cognitive biases, such as framing bias, in drawing this comparison, I was struck by the apparent similarities between the three layers of Michael's AI solution and the team types described in "Team Topologies." Both models recognize the importance of specialization, collaboration, and managing complexity in order to deliver value effectively.
In Michael's AI solution, the Assistants (Stream-aligned Teams) rely on the expertise of the Researchers (Enabling Teams) to provide accurate and relevant information to users. The Researchers, in turn, rely on the Librarians (Complicated Subsystem Teams) to provide them with the curated data they need to do their jobs effectively. This interdependence and collaboration between the different layers of the AI solution mirror the way in which the different team types in "Team Topologies" work together to deliver value to customers.
Moreover, both models emphasize the importance of encapsulating complexity. In "Team Topologies," Complicated Subsystem Teams are responsible for building and maintaining the complex parts of the system, making it easier for other teams to consume. Similarly, in Michael's AI solution, the Librarians are responsible for curating and maintaining the complex data sets that the Researchers rely on, making it easier for them to provide accurate and relevant information to the Assistants.
The parallels between Michael's AI solution and the team types described in "Team Topologies" suggest that the principles of effective team organization and collaboration are applicable not just to human teams, but also to the design and implementation of complex AI systems. By recognizing the importance of specialization, collaboration, and the need to manage complexity, we can build AI systems that are more effective, efficient, and reliable.
However, while the comparison is intriguing, there are significant differences between human teams and AI systems. Human teams are composed of individuals with diverse backgrounds, personalities, and motivations, which introduces challenges and complexities that AI systems don't face. Human teams can also adapt and learn in ways that AI systems, for all their impressive capabilities, may struggle to replicate.
Despite these differences, the insights gained from drawing parallels between the two models can still be valuable. By understanding the principles that underlie effective team organization and collaboration, we can design AI systems that are more robust, adaptable, and capable of delivering value to users. At the same time, by recognizing the unique challenges and opportunities presented by AI systems, we can continue to refine and improve our approaches to building and deploying these technologies.
🤔 In conclusion, Michael Spiteri's session offered a fascinating glimpse into the world of generative AI and the approaches being taken to build and train these powerful tools. By dividing his solution into three distinct layers of models - Assistants, Researchers, and Librarians - Michael has created a structure that bears a striking resemblance to the team types described in "Team Topologies." Allowing for the cognitive biases that can creep into such comparisons, the parallels suggest that the principles of effective team organization and collaboration apply not just to human teams, but also to the design and implementation of complex AI systems. By leveraging insights from both domains, we can continue to push the boundaries of what is possible with AI while developing and deploying these technologies responsibly and effectively.