In 15 years, service-oriented architecture (SOA) has gone from a buzzword to an established technology, but new patterns, frameworks, and standards continue to emerge in the SOA space. In this recently developed course, we will focus on the design decisions and tradeoffs that SOA architects face today. Topics covered include REST constraints and good practices; SOA solutions that incorporate event-driven messaging, API gateways, and orchestration platforms; microservices vs. monoliths; security; alternatives for integrating with external systems; and other design considerations for SOA solutions.
This course is targeted at architects designing software-intensive systems with a goal of adopting DevOps practices to enable continuous delivery of high-quality, secure software. If you know nothing about DevOps, don’t fear. The course begins with a brief overview of DevOps and key concepts. If you attended last year, consider coming again since roughly half of the material is new. In addition to examples of designing for deployability from real projects, we have added deep dives and case studies that focus on variability options and the pros and cons of popular DevOps architectural patterns such as microservices, feature toggling, canary testing, and image baking. We also explore more deeply how to integrate static analysis tools into the deployment pipeline to minimize architectural drift and provide tips for how to get the best value from them. Practical takeaways include a template for specifying measurable deployability requirements and a handout with more than 20 architectural tactics successfully used on DevOps projects. To keep things interesting, we also include a facilitated discussion session on the role of the architect and Infrastructure as Code.
Technical debt occurs when a design or construction approach is taken that’s expedient in the short term but that increases complexity and cost in the long term. Whether it results from ignorance, accident, or strategy, all software-reliant systems carry some technical debt. If managed well, some technical debt can accelerate design exploration. Left unrecognized and unmanaged, accumulated technical debt results in increased development and sustainment costs. This course is designed to help professionals who develop and maintain software-reliant systems gain a better understanding of technical debt and how to manage it.
This one-day course emphasizes the importance of intentional and strategic management of technical debt that is supported by architecture-focused practices.
The right architecture can make or break a project, or an entire company. Participants will learn what architects do, why it is important, and some tips on how to talk about architecture with stakeholders. We'll start with some definitions and case study examples of how architecture supports business goals. We'll dig into how an architecture's separation of concerns helps us deal with complexity. We'll look at some rules of thumb for creating good architectures and how to incorporate architecture into agile projects, and finish up with a map of architecture-centric processes.
There are many systems that we know how to architect (usually because we've built them many times before). There are also many systems for which we know a process that will lead us to a reasonable architecture (usually because the forces on our project permit incremental and iterative development). There are even some things we know how not to architect (because we've tried before). However, there are some systems for which we hardly know where to begin (because not only are they wickedly hard, they are also far beyond our current art and science). These are the classes of systems that most interest me: how do we architect the unknown? In this presentation, we'll start by laying a foundation of what we know we know about software architecture, and then we'll consider what we know we don't know. Following that, we'll take a leap into the unknown and look at the kinds of systems that will stretch us technically, socially, and ethically.
The Architecture Practices Initiative of the SEI has developed a family of domain-specific languages for system specification and verification and validation (V&V) to work in concert with the Architecture Analysis & Design Language (AADL). These languages provide support for exploring architecture’s influence in all phases of the traditional software lifecycle. Additional tools—including AGREE, Resolute, and the plugins of the Open Source AADL Tool Environment (OSATE)—supplement the domain-specific languages to provide both static and dynamic V&V capabilities. These tools establish feedback loops between requirements and architecture and between architecture and implementation, all of which are unified with V&V activities.
In this session, attendees will see a set of pedagogical artifacts that illustrates the use of architecture information to support multiple graduate courses. All of the artifacts have been successfully used. With the ReqSpec language, requirements are explicitly linked to the development stakeholders and initial architecture information. The requirements information is linked to verification assets using the Verify language and to certification assets using the Assure language for assurance cases. A brief set of exercises will illustrate some uses of the artifacts both in instruction and development.
Statoil is a large upstream oil and gas company with many physical installations both onshore and offshore. All of these installations, and the logistics operations that support them, use an increasing number of interconnected devices with varying degrees of capability and smartness to improve their operation. As an organization, we have many years of collected experience with the types of devices that are now being called the Internet of Things (IoT). We would like to share some of the trends, challenges, and opportunities that we see in this area and discuss its importance for the future of our installations and the software that is required to utilize their full potential.
We will present different aspects of what IoT means in the context of a large oil- and gas-producing company and how it affects the way we think about software and software architecture. We have worked with interconnected devices of different kinds for many years and have some thoughts on future challenges in this area that we would like to share with the SATURN community. We will organize the discussion around four main topics:
While we do not promise answers, we think that we can provide a perspective on these challenges from a large industry actor that can serve as a starting point for discussion on the path to some insights into how the IoT affects the way we think about software architecture.
Let’s face it, the system you maintain isn’t meeting expectations. The crystal ball you were issued at engineer academy was broken, and you guessed wrong about how the system would grow. Now, you’re faced with a choice: should you bite the bullet and rewrite, or should you somehow try to salvage what you have? In this session, I will talk about the evolution of the system of applications at Pluralsight as we grew from 4 to 80 developers and from one to six technology stacks over a period of four years.
No doubt microservices are important, but it seems that all the hype around them comes with inflated expectations. Many consultants, authors, and vendors who provide services or products related to SOA have rebranded their material to mention microservices. But what is a microservice from a software-architecture perspective? What do you gain and what do you lose with microservices compared with the monolithic model? (Yes, there are disadvantages!)
In this talk, we’ll try to answer these questions and discuss some other important SOA patterns that can help achieve common SOA quality requirements.
Software architecture is critical for business success. Think about it. Solid architecture prevents defects and system failures. It saves money and gets quality products to the market faster. Most software-reliant systems are required to be modifiable and reliable. They may also need to be secure, interoperable, and portable. How do you know whether your software architecture is suitable or at risk relative to its target system qualities? This SEI boot camp session on Architecture Evaluation covers practical and proven architecture analysis and evaluation techniques that identify risks early in the life cycle, including scenario-driven peer reviews and the Architecture Tradeoff Analysis Method (ATAM), a tested process that has been used in many evaluations over the past 15 years.
Are you building applications that run in the cloud? Are you taking the necessary architecture steps to make them cloud-ready? In this session, I will present “12 Factor Apps: A Scorecard” to help you evaluate your application’s cloud-readiness. The content of this session stems from my hands-on experience at GE working with many teams to migrate legacy applications into our Predix cloud platform.
So you’ve decided to take your app to the cloud. Great! There are common pitfalls I would like to help you avoid. For example, pre-cloud applications may only be able to run on certain well-groomed servers (“pets”), but when deploying to the cloud, your application’s servers will be disposable (“cattle”). How can you properly refactor your application’s architecture to prepare for this new type of deployment environment?
The 12 Factor App is a methodology created by Adam Wiggins (co-founder of Heroku) to provide guidance for cloud application development. Through examples, I will use these 12 Factors to provide a ranking system for you to grade and identify ways to improve your application cloud-readiness. Attendees will be able to see how their applications stack up against the 12 Factors and will gain practical tips for improving their cloud-readiness.
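As a minimal illustration of one of the 12 Factors (Factor III, "store config in the environment"), the sketch below reads deploy-specific values from environment variables with local-development fallbacks instead of hard-coding them; the variable names and defaults are hypothetical, not part of the session material:

```java
// A sketch of 12-factor config handling: deploy-specific values come from
// environment variables, with safe local-development fallbacks.
// Variable names and defaults here are hypothetical.
public class AppConfig {

    // Read a setting from the environment, falling back to a development default
    static String get(String name, String fallback) {
        String value = System.getenv(name);
        return (value != null && !value.isEmpty()) ? value : fallback;
    }

    public static void main(String[] args) {
        String dbUrl = get("DATABASE_URL", "jdbc:postgresql://localhost:5432/dev");
        int port = Integer.parseInt(get("PORT", "8080"));
        System.out.println("db=" + dbUrl + ", port=" + port);
    }
}
```

Because each cloud instance is "cattle," any replacement instance picks up the same behavior purely from its environment, with no server-specific grooming.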
Evolving the architecture of legacy systems for unintended use is difficult. The architectures are not documented well, the team that built the system has often moved on, old and out-of-date code is permanently intertwined, and the technology trends of the present are dramatically different from when the system was first developed. This is the situation our team found itself in while working to design a new cloud version of an existing product.
In this talk, we will share our story about evolving an enterprise search platform to create an isolated, portable crawler. The existing system has been in active development since the early 2000s and was initially designed for use in traditional data centers. Since then, it has evolved into a more pluggable product, able to connect to a variety of data sources. We wanted to recreate this functionality in the cloud, but we had a tight time constraint. We soon realized we might be able to migrate part of the previous code instead.
During this session, you will learn about modularity, using experiments to improve decision making and reduce risks, and how to analyze a legacy system to make well-grounded decisions for future design. We will demonstrate these lessons with examples from our experiences with this project.

Ideally, the best application-security solutions would be built with security in mind from the ground up. To do this, you must start with a secure coding platform. Mainstream programming languages such as Java and C++ are inherently flawed with vulnerabilities derived from integer overflow and underflow, math errors from floating-point floors and ceilings, and loss of information in type conversions. The languages we use were not designed for developing secure code.
The Secure Coding Framework (SCF) corrects these flaws and prevents developers from silently triggering errors that lead to cyber vulnerabilities. It also adds new features such as built-in range checking and exception handling to data types that enhance secure coding efforts. This presentation covers the development and use of the SCF as a secure coding platform. SCF makes it easy for developers to write secure code in mainstream programming languages. It supports the concept of building in security from the beginning rather than as an afterthought. Kertis will discuss the business drivers, software quality attributes, design and implementation, details of the APIs, and the patent-pending technology behind the product.
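The abstract's claim about silently triggered errors is easy to demonstrate in plain Java (this sketch is independent of the SCF itself): an addition that wraps without warning, a narrowing conversion that discards bits, and the kind of explicit range checking that a secure coding platform aims to make the default:

```java
public class OverflowDemo {

    // Plain int addition wraps silently on overflow
    static int unsafeAdd(int a, int b) {
        return a + b;
    }

    // Math.addExact throws ArithmeticException instead of wrapping
    static int safeAdd(int a, int b) {
        return Math.addExact(a, b);
    }

    // Narrowing conversion silently discards the high-order bits
    static byte narrow(int value) {
        return (byte) value;
    }

    public static void main(String[] args) {
        System.out.println(unsafeAdd(Integer.MAX_VALUE, 1)); // -2147483648
        System.out.println(narrow(257));                     // 1
        try {
            safeAdd(Integer.MAX_VALUE, 1);
        } catch (ArithmeticException e) {
            System.out.println("overflow detected: " + e.getMessage());
        }
    }
}
```

Neither the wrap nor the truncation produces any compile-time or runtime warning, which is exactly the class of silent error the talk targets.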
A wealth of material covers code review from a code quality standpoint, tracking a host of metrics and generating enough Big Data to employ a small army of analysts at some companies. But introducing code review at the architecture stage seems rarely to be done; it may even be sufficiently rare to qualify as novel. In this presentation, I will focus on some quality attributes valued by a team that conscientiously conducts code reviews, and how code review enables, but does not guarantee, these attributes in the team’s systems architecture: accountability, accuracy, auditability, debugability, efficiency, evolvability, failure transparency, inspectability, learnability, maintainability, manageability, modularity, predictability, repeatability, safety, serviceability, simplicity, standards compliance, testability, traceability, and understandability. I posit, based on my own meandering experience across several projects, both open and proprietary, that these quality attributes are enabled by code review and saved from being poorly timed afterthoughts or patches onto an architecture. To make the case, I’ll draw analogies to lawyers, debt collectors, and credit ratings.
For many developers, globalization is an afterthought; the unfortunate reality is that many of us have never considered it at all. It would have been nice to treat language as an abstraction from the beginning. Most mature frameworks have already considered this and implemented tools to handle their wide variety of users, which should prompt us to do the same.
However, it isn’t always simple to flip this switch for the large projects that we work on today. Products need to have a methodology for storing and retrieving strings of parameterized text that does not rely on specific inflections, pluralization, or grammatical structures. We need to provide culturally accurate display formats of data types such as dates, numbers, and currency. Timestamp storage and retrieval needs to be standardized so time zones can display correctly. There are also concerns with languages, such as Hebrew and Arabic, that have bidirectional text that must be accounted for in user-interface elements. Our applications can provide sensible defaults based on regional data, but to deliver a globalized product, control of these abstractions needs to be exposed to the users of our applications. Finally, designing sound testing practices surrounding these abstractions is key to being able to rest peacefully once we have addressed all of these concerns.
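As a small illustration of the culturally accurate display formats mentioned above, standard library formatters already encode many of these locale rules. This is a Java sketch for illustration only (the session itself works in Ruby on Rails):

```java
import java.text.NumberFormat;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.format.FormatStyle;
import java.util.Locale;

public class LocaleDemo {

    // Currency display: grouping, decimal separator, and symbol differ by locale
    static String currency(double amount, Locale locale) {
        return NumberFormat.getCurrencyInstance(locale).format(amount);
    }

    // Date display: field order and month names differ by locale
    static String date(LocalDate d, Locale locale) {
        return d.format(DateTimeFormatter.ofLocalizedDate(FormatStyle.LONG)
                                         .withLocale(locale));
    }

    public static void main(String[] args) {
        LocalDate d = LocalDate.of(2016, 5, 2);
        System.out.println(currency(1234.5, Locale.US));      // $1,234.50
        System.out.println(currency(1234.5, Locale.GERMANY)); // e.g. 1.234,50 €
        System.out.println(date(d, Locale.US));               // May 2, 2016
        System.out.println(date(d, Locale.GERMANY));          // e.g. 2. Mai 2016
    }
}
```

The point is that none of these formats should be assembled by string concatenation in application code; the locale is the abstraction, and the formatter owns the rules.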
During this presentation, I will walk through the experience of converting a Ruby on Rails web application that didn’t account for globalization needs at its inception, and I will use real code to illustrate how to best address these topics of globalization.
The common denominator here is that software developers are expected to keep in mind many abstract yet complex models that constrain the code they write. In some ways, these constraints are a burden, and in other ways, they are the light that illuminates a path forward.
I will discuss an idea called Model-Minded Development that generalizes across DDD, design patterns, architecture, TDD, and coding styles. The defining characteristic of senior software developers is their facility with Model-Minded Development, and it enables them to operate at an advanced level.
Industrial IoT is the next wave of technological revolution that will dramatically transform manufacturing, energy, health care, transportation, and other industrial sectors. This transformation will require new technologies that will connect data centers, industrial control systems, industrial machines, and humans. Connectivity and interoperability of heterogeneous systems are the foundations of Industrial IoT and major prerequisites to realize its full potential. There are too many proprietary industrial protocols and legacy standards that create interoperability and security challenges. These challenges cannot be resolved without a comprehensive approach to security and data privacy. Another big challenge is the amount of data generated by industrial devices and machines. It will be critical to build systems that can automate data collection, cleansing, and aggregation.
In this session, we will discuss GE’s experience for the last few years in building the Industrial Internet platform called Predix. We will focus on software architecture, design, and best practices in building secure and reliable systems to address the most important Industrial IoT challenges and use cases. We will discuss some of the important use cases and our architectural approaches to data acquisition, data processing, analytics, and security.
Writing APIs in a RESTful style is growing in popularity, but not all use cases are good fits. Several interesting alternative technologies are emerging. This presentation provides overviews of the following alternative styles:
A single user journey through a complex system can involve multiple layers of interaction across the front end and back end of a web or software application. Architects, developers, and business people all need a shared understanding of a feature or service. How can we keep all the myriad stories, features, and enhancements in mind when creating code-based events without getting lost in the details?
That’s where flow mapping comes in. Flow mapping is similar to its big sister, story mapping. Both are methods to visualize work items from your Agile product backlog, but flow mapping occurs at a much more granular level. Usually limited to a single action and role through a system, it enhances the decision points and events shown by appending a layer of user stories and user interactions on a flowchart or process diagram of the journey. This flow map gives an overview of what is needed to create a robust process that minimizes risk through identification of high-failure areas, and it links the backlog to the events on the map, providing a clear overview of development tasks without compromising speed and agility.
The ability of applications and services to operate across heterogeneous devices and domains is a major barrier to realizing the vision of the Internet of Things (IoT). A primary challenge is to develop standards and best practices that enable the seamless integration of multimodal data. This integrated data could support new types of applications and services that facilitate more comprehensive understanding, insights, and experiences with the things and people around us.
The goal is to achieve semantic interoperability, that is, to represent and exchange information in a form whose meaning is independent of the application generating or using it. Semantic interoperability accomplishes two important objectives:
Significant new business and innovation opportunities will emerge from multi-domain IoT systems. To realize that promise, IoT systems must be designed to support some level of commonality by defining interoperable data and metadata models, formats, and communication protocols. This talk presents various motivating use cases and introduces several example technologies to help get there. In particular, we will focus on the definition and use of semantic models and protocols for representing, exchanging, and integrating data useful for context awareness, personalization, and decentralized quality assurance for IoT systems, such as personalizing lights in your smart home.
As a software system evolves, its design structure often degrades and accumulates technical debt. The emergence of code smells, such as a God Class, is a well-known symptom of such problems. Although several tools exist for detecting code smells, the number of smells returned by current tools generally exceeds the number of problems developers can deal with. This is particularly evident when a team must focus on customer-visible features and thus has limited time for system restructuring. Furthermore, not all smells require urgent attention, as they might not be related to architectural problems or business goals. In this context, having a tool that can prioritize critical smells is of great help for architects and developers.
To this end, we developed JSpIRIT (Java Smart Identification of Refactoring opportunITies) as a recommender system for ranking code smells according to multiple criteria. JSpIRIT scans the system code, but its analysis is flexible enough to include information from past system versions, modifiability scenarios, and architectural components, among other assets. In the past few years, we have applied JSpIRIT to several Java projects with satisfactory results. Consequently, we have continued to improve the tool with more features. For instance, since smells often appear interrelated in the code, JSpIRIT provides insights to the developer about smell groupings. In addition, it offers visualizations for different smell configurations. We will present the key tool features and discuss project experiences in which JSpIRIT was useful for diagnosing the system “health” and planning for refactorings.
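JSpIRIT's actual ranking criteria are not spelled out here, but the general idea of scoring a smell against multiple criteria and flagging only the worst offenders can be sketched with a toy severity metric. The weights and threshold below are invented purely for illustration and are not JSpIRIT's:

```java
public class SmellRanker {

    // Toy multi-criteria severity score for a God Class candidate.
    // Real detectors combine many more signals (history, scenarios, architecture);
    // these weights are illustrative only.
    static double severity(int methods, int linesOfCode, int collaborators) {
        double sizeScore = linesOfCode / 100.0;
        return methods * 0.5 + sizeScore + collaborators * 0.8;
    }

    // Flag only candidates above an (arbitrary) severity threshold,
    // so developers see a short, prioritized list rather than every smell
    static boolean looksLikeGodClass(int methods, int linesOfCode, int collaborators) {
        return severity(methods, linesOfCode, collaborators) > 30.0;
    }

    public static void main(String[] args) {
        System.out.println(looksLikeGodClass(40, 2000, 12)); // large, coupled class
        System.out.println(looksLikeGodClass(5, 200, 2));    // ordinary class
    }
}
```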
In this presentation, I will first describe the Scala programming language and its position in the language space. I will then describe the Apache Spark programming model and its role in the Big Data space. Next, I will discuss the Scala features that make it the first choice for Spark programming, briefly commenting on the Python and Java alternatives. I will also cover some basic programming tools helpful for doing Scala-based Spark development. Finally, I will discuss my experience teaching these technologies in a graduate software engineering course.
During the last several years, prominent speakers and media headlines have promised us a big future for the Internet of Things (IoT). Some predict that the number of internet-connected things will reach 50 billion by 2020, others estimate the total annual economic impact of the IoT at up to $11 trillion by 2025, and still others believe that by 2020 almost a quarter billion vehicles will be connected to the internet, forming key elements of the IoT era.
While all these predictions could become reality, there is plenty of work still to be done, and many prerequisites must be completed to enable this revolutionary growth. Even now, a typical software architect with a common computer software design background could be easily confused by the plethora of terms, concepts, proprietary standards, protocols, and solutions for internet-connected things. Some of these have a more than 40-year history with roots in pre-internet implementations. Needless to say, a modern software engineer who lives in the age of open-source software, Git, powerful integrated development environments, and software-defined everything can be simply overwhelmed.
In this session, we will share our vision on the current state of the standardization process for the IoT and discuss several reference architectures with mapping to modern IoT protocols, platforms, middleware, and cloud-based offerings. We will also present real-world case studies that cover in more detail some architecture concerns such as maintainability, security, power efficiency, availability, and autonomy.
Software is everywhere, and the ubiquity of software raises the stakes in ways many software developers may not realize. Poorly architected software-intensive systems can result in end-user frustration, economic loss, and even the loss of human life. What’s more, modern software is developed under a variety of conditions and environments, some of which are unfair, unsafe, or otherwise hostile. While free lattes are not an inalienable human right, programmers should probably have the right to receive equal pay for equal work.
Beyond a few universal human rights, determining whether a given architectural design decision is ethical is not always a simple yes or no. For example, consider the case of an autonomous drone that finds itself in a situation where any decision it might make results in the loss of human life. A programmer today, right now, might be writing code that will make that decision.
As architects, I propose that we have a responsibility to define the ethical framework within which downstream designers will operate, in the same way that we define other quality attributes.
Most nontrivial software systems suffer from significant levels of technical and architectural debt. This leads to an exponentially increasing cost of change, which is not sustainable over the long term. The single best thing you can do to counter this problem is to give some love to your architecture by carefully managing and controlling the dependencies among the different elements and components of a software system. This session will first look at the major reasons why so many systems end up in an unmaintainable state and then show how to address the problem by using automated quality gates in combination with a domain-specific language that can enforce an architectural blueprint over the lifetime of a software system.
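The session's specific DSL is not shown here, but the core idea behind such a quality gate, encoding the architectural blueprint as data and failing the build when an observed dependency violates it, can be sketched as follows (the layer names and rules are illustrative assumptions, not the session's actual language):

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ArchitectureGate {

    // Illustrative blueprint: which layers each layer may depend on.
    // A real tool would parse this from a DSL and extract dependencies
    // from bytecode; here both are hard-coded for the sketch.
    static final Map<String, Set<String>> ALLOWED = Map.of(
            "ui",      Set.of("service"),
            "service", Set.of("domain"),
            "domain",  Set.of());

    // Each dependency is a {from, to} pair of layer names.
    // Returns true only if every observed dependency is permitted.
    static boolean check(List<String[]> dependencies) {
        for (String[] dep : dependencies) {
            Set<String> allowed = ALLOWED.getOrDefault(dep[0], Set.of());
            if (!allowed.contains(dep[1])) {
                System.out.println("violation: " + dep[0] + " -> " + dep[1]);
                return false; // a CI quality gate would fail the build here
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(check(List.of(
                new String[]{"ui", "service"},
                new String[]{"service", "domain"})));
        System.out.println(check(List.of(
                new String[]{"domain", "ui"}))); // upward dependency: rejected
    }
}
```

Run on every commit, a check like this keeps the blueprint and the code from silently drifting apart.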
The essence of design is structure: What parts make up the whole, and how are they related? In the field of software, we have ways to structure implementation—with functions and datatypes, design patterns, architectures, and so on—but we lack a way to structure behavior. Witness the way we sometimes talk of having “thousands of requirements,” although a requirement is usually little more than a transition in a state machine.
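The observation that a requirement is often just a transition in a state machine can be made concrete. In the sketch below (a hypothetical checkout flow, not from the talk), each entry in the transition table is one "requirement," and the thousands-of-requirements document collapses into a small, inspectable structure:

```java
import java.util.Map;

public class CheckoutStates {

    // Each entry is one "requirement": (state, event) -> next state
    static final Map<String, String> TRANSITIONS = Map.of(
            "cart|checkout",   "payment",
            "payment|approve", "shipped",
            "payment|decline", "cart");

    // Events with no defined transition leave the state unchanged
    static String next(String state, String event) {
        return TRANSITIONS.getOrDefault(state + "|" + event, state);
    }

    public static void main(String[] args) {
        System.out.println(next("cart", "checkout"));   // payment
        System.out.println(next("payment", "decline")); // cart
        System.out.println(next("cart", "approve"));    // cart (no such transition)
    }
}
```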
To make software that is more usable and more robust, we need a way to structure behavior. Just as architects design the structure of a building in terms of light and space and flow, leaving to engineers the task of designing the physical structures that will support their visions, so we need software architects who can shape software independently of its realization.
In this talk, I'll present the elements of a new theory of software design that provides a structuring principle for behavior, criteria for identifying good and bad structures, and patterns to emulate. I'll also report on our experience applying the theory on a variety of systems.
Since 2007 Siemens has released several learning and certification programs for software, system, and test architects as part of the Software Initiative Curriculum. Due to their positive influence in their own organizations, certified architects have gained high visibility and more appreciation over the years. A key element of the Siemens curriculum is active management of the network of certified architects across an organization. Major elements of this approach include dedicated social network groups, wikis, alumni meetings, and company-wide improvement projects with volunteering certified architects. This presentation gives an overview of how to manage these networking elements and discusses the effort behind it.
Have you ever been confused by an arrow in a box-and-line design diagram? Do you use the Unified Modeling Language (UML) in your software architecture? Have you ever wondered, “Where is the line between architecture and detailed design?” If you answered yes to any of these questions, this tutorial has practical and valuable information for you.
The goal is to show you what information about an architecture to capture so that others can successfully use it, maintain it, and build a system from it. Important takeaways from this talk include the multiple views of architecture; how we can use UML in each view and when other notations work better; what views we can use to evaluate performance, availability, modifiability, and other qualities; how to complement structural diagrams with sequence diagrams, statecharts, and other behavior diagrams; and guidelines and templates to make your architecture documentation more effective.
It can be dispiriting to find that what seemed to be a well-designed system, carefully implemented by an Agile team, runs into problems as soon as it hits production, but such things do happen. Conversely, how is it that gnarled old systems containing tangled code and without a unit test in sight are often successful production applications that run reliably for years?
Today the DevOps movement aims to prevent problems when systems reach production by unifying the work of development, application management, and production operations staff. This is a terrific step forward, but we still need actionable advice that architects and development teams can apply to prevent this Dev–Ops interaction from being a frustrating and ineffective process.
This session will explore why good software development practice is important but ultimately isn’t sufficient to create a reliable and effective enterprise system. We will discuss what being “production ready” really means and then look at the design forces that this implies for our systems. This inquiry will allow us to understand the principles, patterns, and practices that architects need to know and apply in order to work with our Ops colleagues, get our systems into production, and keep them there.
This is a participatory session where users can see, touch, and use a variety of IoT hardware and sensors. Hardware and development environments will be provided so that participants can see how development is done on different platforms. We will also explore various languages and technologies used for this development. Multiple examples of boards will be provided for use during the session, including
This is the story of an organization that found itself in the midst of a crisis, struggling to meet project timelines while adhering to a strict high-quality bar, but seemingly unable to scale up to a challenging roadmap and evolving market as the project’s technical debt grew. Among the numerous task forces set up to handle the execution crisis, a software architecture team was formed, entrusted with creating a software vision befitting an organization that, regardless of the crisis, was required to grow its business into new market segments.
While facing several silo teams, each in turn facing a steady stream of new features to develop on top of a significant legacy codebase, the team of architects had to build trust with engineers as well as with managers, by understanding their pain points and providing value through pragmatic solutions. Eventually, more than just “architecting,” the team aspired to serve as the organization’s superglue, fostering collaboration across disciplines, projects, and teams.
In this talk, we will share our experience of building this software architecture team at one of Intel’s R&D organizations over the last four years, helping an organization without well-established software architecture practices get out of the crisis and start building its software to meet its growing business needs. We will talk about our successful (and less successful) experiences in establishing the architect’s role and share our vision and practices.
A bridging system is a home-grown routing solution that routes traffic to either a new or a legacy system during data migration. Bridging systems support core system transformation efforts and mergers and acquisitions, such as billing system transformations and order management system transformations or consolidations, in industries including telecommunications, transportation, and consumer/commercial product distribution. The bridging system architecture mandates scalability, flexibility, and high performance to minimize the impact of system downtime and to make the system transformation seamless to end users during data migration. The bridging system can also be architected to serve as a data segmentation router to support an Active/Active data center model after the data migration is complete. This presentation will show what a home-grown bridging system architecture looks like; how it works end to end to satisfy business needs in terms of system architecture, the technology stack, major component design, and integration with impacted applications; and performance considerations.
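As a rough illustration of the core idea (the names and routing logic here are invented for illustration, not the presenter’s design), a bridging router can consult a migration-status lookup and forward each request to whichever system currently owns the data:

```python
# Hypothetical sketch of a bridging router: requests for already-migrated
# accounts go to the new system; everything else stays on the legacy system.

LEGACY, NEW = "legacy-billing", "new-billing"

class BridgingRouter:
    def __init__(self):
        # In a real deployment this would be a fast, replicated lookup store;
        # here it is just an in-memory set of migrated account IDs.
        self.migrated = set()

    def mark_migrated(self, account_id):
        self.migrated.add(account_id)

    def route(self, account_id):
        """Return the target system for a request against this account."""
        return NEW if account_id in self.migrated else LEGACY

router = BridgingRouter()
router.mark_migrated("acct-42")
print(router.route("acct-42"))   # routed to the new system
print(router.route("acct-99"))   # still served by the legacy system
```

Because the routing decision is per record, the cutover can proceed incrementally, which is what makes the migration seamless to end users.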
The frameworks that formal architectures supply both enable and constrain certain types of design decisions and desired system qualities. Just like laws and cultural conventions, they embody the values and belief systems of people even though it may not always be obvious to the casual user or even to the original architects themselves.
Today the world is in the midst of laying down a communication and decision fabric that will connect trillions of people, objects, and intelligent machines into the most complex cyber-physical system that the world has ever imagined. The emerging Industrial Internet will connect systems that our lives depend on such as power systems, transportation systems, and healthcare networks.
It has been proposed that globalization has made the world "flat," but is the next generation of network and business architectures now destined to make the world "shallow"? Lost in a seemingly endless fog of distractions and short-term weak connections, will the urge to be connected overwhelm our ability to remain cognitive and free individuals? Has "The Medium" changed the message again without our having knowingly approved? As machines increasingly evolve with stronger forms of artificial intelligence and cognitive computing, they will become more human-like. More importantly as we become a collection of networked beings, will we become more machine-like, twitching and tweeting an endless stream of bits, or will we achieve a higher level of awareness and sophistication? What is our destiny?
Clearly it is a matter of architecture. The systems of systems we are building now will alter how we later process critical information, choose to innovate, and interact with man and machine in the future. In this talk, I will examine the formation of the Industrial Internet Consortium and the Digital Manufacturing and Design Innovation Institute with this in mind and compare them with respect to their stakeholders, business models, and emerging architectures.
This session is based on my experience leading GE Digital teams to develop solutions for external customers that incorporate the Business Model Canvas pattern in architecting microservice-based solutions. As in many software development organizations, adopting a cloud platform empowers our developers to develop, test, and deliver applications at an unprecedented rate. In addition, microservice-based architecture enables us to deliver scalable business-aligned capabilities and manage them much more effectively than monoliths.
In this development context, however, challenges remain:
In my talk, I’ll describe an approach to successfully address such challenges inspired by Design Thinking Business Analysis: Business Concept Mapping Applied, by Thomas Frisendal. I extend the author's work to product architecture and agile development to enable teams to align with a customer’s needs by validating for desirability, viability, and feasibility.
First, I will summarize the Lean Business Canvas, Concept Maps, and Domain-Driven Design patterns. Second, I will introduce the details of a novel lean approach to developing product architecture that combines Business/Lean Model Canvas, Design Thinking, and Agile practices. Architects will learn how to
The Industrial Internet of Things megatrend is generating massive amounts of data from a wide range of new data sources. The big source of new data is analog data, coming from sensors tied to machines, electronic devices, and the environment. Big Analog Data™ sources are all around us (light, RF signals, vibrations, temperatures, and so on). We will need new architectures and approaches to manage and process this data at the edge and in the cloud to extract insights and drive business impact.
Modern service-oriented architecture (SOA) systems force teams to reconcile a multitude of organizational and technology decisions. With each decision, the team reaffirms allegiance to its chosen philosophy of message passing, platform governance, and quality assurance. Which side are you on: Centralized SOA or Decentralized SOA?
During this session, we will explore essential topics in modern SOA including governance, message passing strategies, orchestration, “smart” and “dumb” pipes, quality assurance strategies, deployment, and many other topics. For each topic, we’ll cover the most important information you need to know and debate the pros and cons of a centralized and decentralized approach.
And because it’s Michael and George hosting this session, we can’t just leave it at that. In the spirit of the American Federalists (strong central government) vs. Antifederalists (decentralized government) of the 1790s, George, playing the role of a modern Jefferson, will advocate for decentralized SOA while Michael, as a modern Hamilton, will attempt to convince you, the audience, that centralized SOA is the best path forward. This is a session you will not want to miss!
Want to kick-start (or improve) your visual presentation and documentation skills? Increase your visual IQ at the whiteboard during collaborative design sessions? Hone your architecture drawing capabilities in general? If you answered “yes” to any of these questions, this workshop is for you.
From brainstorming and whiteboard sketching to concept presentations and formal documentation, architects need nimble visualization skills to communicate quickly and clearly their architecture decisions and artifacts—without a lot of words or verbal explanations.
In this hands-on activity session, MJ and Amine will package their respective experiences in UX design and architecture to coach participants through a series of revelatory exercises that get past self-judgment and into effective visualization techniques applicable to architects. We will directly confront the sometimes daunting challenge of drawing in real time with practical tips for framing context and engaging others in visual thinking. We will focus on ways to build a visual lexicon and improve lettering skills … all while having fun. Participants will come away with a transformed attitude about their ability to draw their architecture artifacts and a new access to visual storytelling.
Software architecture is a key enabler of business strategy, and it must provide value. What guidelines exist for the business-oriented software architect, and how do they correlate with an Agile business model? Business involves a number of dimensions that the architect must understand, including negotiations, risk management, business analysis, and communication skills. But more importantly, the architect must be able to align the business strategy with the software architecture. He or she must document the architecture in a way that shows how the system context derives from the business context. Typically, though, the requirements documents gathered from stakeholders do not capture the quality attributes well enough.
To overcome this constraint, architects must evaluate the architecture by engaging stakeholders early in the software development process, negotiating the system’s priorities, and making tradeoffs. Scenario-based techniques like the Architecture Tradeoff Analysis Method (ATAM) provide one of the most general and effective approaches for achieving this. But to attain true business agility, a combination of the ATAM and the service-oriented architecture (SOA) pattern is suggested. The SOA pattern applies defined business functionalities built as software components to create interoperable services. SOA provides a well-designed interface protocol for integrating different services that are flexible enough to anticipate future change, a key feature for business agility. The role of software architects must not be limited to the technicalities of designing the software alone. This presentation suggests a method for architecting Agile businesses using the architecture practice at Konga, an African e-commerce company, as a case study.
Technology changes and new (functional) features are two common drivers for evolving a software system and, often, its underlying architecture. In these cases, an evolution roadmap that provides a sequence of tasks and activities with a clear rationale is needed and, furthermore, often demanded by management.
In our experience, producing such a roadmap can be challenging, even when the source and the target systems are known, due to various business, technology, and operational concerns at play that affect the evolution paths and hinder a global analysis by the project stakeholders. Thus, we argue that evolution planning must be a shared responsibility between managers and architects.
In this presentation, we discuss experiences in applying a set of architectural techniques—utility trees, scenarios, and architectural views—to perform architectural assessments of source and target software architectures with the goal of generating an evolution roadmap that facilitates a risk analysis and a convergence of the stakeholders’ decisions. These experiences are based on two projects centered on business process management (BPM) solutions and a third one involving a legacy core in the banking and telecom domains. As the main lessons learned, we can say that architecturally informed roadmaps help decision makers to understand
The time for IT to be a provider of big systems is over. Businesses know how to solve their problems, and IT should get out of the way and let them. The role of IT should be to provide software engineering competence and infrastructure to enable a business to create solutions for its challenges. Enterprise architecture and software architecture can be instruments that a business uses to understand where the solutions fit in the larger landscape and how best to utilize existing components. How can we provide this essential tool to the business in order to let it solve its own problems while we ensure a consistent software architecture and system landscape?
Like many organizations, Bosch is finding its way toward the Internet of Things (IoT). One obvious challenge is the lack of interoperability standards for the IoT. Are we designing the IoT by default? Will the resulting systems have technical, business, and social properties that we can be proud of? Cloud-centric architectures for IoT applications have drawbacks concerning responsiveness, multi-vendor fragmentation, and rampant threats to consumer privacy. We claim that similar levels of service, or higher, can be achieved with a system-of-systems approach.
Bezirk is an architectural framework for the consumer-space IoT being developed at Bosch. By building on a brokerless publish–subscribe middleware and promoting open, decentralized systems, the framework places greater emphasis on interoperability protocols, such as for context awareness and personalization, than on the features of “central” or “indispensable” components. By enforcing a user-centric privacy model at the middleware level, it shifts the power to control data exchanges from companies and application developers to end users.
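To make the idea of enforcing a user-centric privacy model at the middleware level concrete, here is a hypothetical sketch (the class and method names are invented for illustration and are not Bezirk’s API): the middleware, not the application, decides whether a data exchange is allowed, based on a policy the end user controls.

```python
# Illustrative sketch: pub-sub delivery gated by a user-controlled policy.
from collections import defaultdict

class PrivacyAwareMiddleware:
    def __init__(self, user_policy):
        # user_policy: topic -> set of recipients the *user* has allowed
        self.user_policy = user_policy
        self.subscribers = defaultdict(list)  # topic -> (recipient, handler)

    def subscribe(self, topic, recipient, handler):
        self.subscribers[topic].append((recipient, handler))

    def publish(self, topic, event):
        """Deliver an event, but only to recipients the user's policy permits."""
        delivered = []
        for recipient, handler in self.subscribers[topic]:
            if recipient in self.user_policy.get(topic, set()):
                handler(event)
                delivered.append(recipient)
        return delivered

# The user has allowed presence data to reach the thermostat and nothing else.
mw = PrivacyAwareMiddleware({"presence": {"thermostat"}})
log = []
mw.subscribe("presence", "thermostat", log.append)
mw.subscribe("presence", "ad-network", log.append)
print(mw.publish("presence", {"home": True}))  # only the thermostat receives it
```

The point of the sketch is the placement of the check: because the policy is enforced inside the middleware, an application developer cannot opt out of it.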
This talk motivates and describes the framework and showcases applications under development. If your smart home is spying on you, go Bezirk.
Do you dream of working on a team of enlightened people who create software that users love? Stop dreaming and start living! This DEV@SATURN talk will get you started on your journey.
Zen is now ready to overtake our offices and enable our teams to create software together. I’ve spent more than 15 years in software development, going from web developer to database administrator, to project manager, to business analyst, to end-to-end solutions architect, to product manager. The patterns I see repeated over and over again include teams missing deadlines, unhappy users, buggy code, and over-budget software. And the causes are the same time and time again: teams fighting, not understanding each other, being stuck in the past, not understanding the users, and overall stressful team environments.
This presentation lays out a step-by-step process to create a new awareness, first in yourself and then, you’ll be surprised to notice, in your team members.
The Internet of Things and “Industry 4.0” will profoundly change the world that we live in and the jobs of the future. It only makes sense—as parents, teachers, and software architects—that we make sure to consider how we are preparing the coming generation for these changes and career opportunities. As a tech-savvy parent who is raising two “digital-native” children (ages 14 and 11), I have seen firsthand how this upcoming generation interacts with technology. I have coached Lego Robotics; taught Arduino, introductory Linux, Minecraft server, and modding classes; and tried to get kids engaged in creative programming activities. But until now I haven’t been able to find a teaching architecture that ties it all together with a specific goal and purpose.
This session will discuss my experiences over the past six years as a parent-teacher of technology to this digital-native generation and will present, with live demos and live digital natives (kids), an IoT teaching platform based on Chromebooks, Linux, NodeJS, Arduinos, Raspberry Pi, NodeMCU, MQTT, Minecraft, Unity, and Node-Red.
In the Agile world, architecture is about making design decisions with just enough anticipation. Too much anticipation leads to overly heavy architectural constructs that may never be used (YAGNI); too little anticipation leads to expensive refactoring and potentially fatal build-up of technical debt.
In this session, we present an approach for Agile architecture roadmapping with just enough anticipation. The approach consists of principles and practices that help address questions like
We will present experiences from architects who have used this approach in practice in multiple organizations. Their experiences show more realistic stakeholder expectations and better prioritization of required architectural improvements.
The tutorial is based on Risk- and Cost-Driven Architecture (RCDA), an approach developed by CGI that has proven to support solution architects globally in a lean and Agile manner. RCDA is a recognized architecture method in The Open Group’s architect certification program.
Tactics are a set of generic design primitives that underlie software architecture design. Security tactics are a principled starting point in designing a secure software architecture. Because they are primitives, security tactics are inherently abstract. It is up to individual software architects, on their own, to refine these tactics to more specific design decisions. For this reason, they need guidance to facilitate and regularize this refinement process.
One form of this guidance is to provide explicit mappings between tactics and security patterns, which are refinements of security tactics: less abstract and closer to code. Identifying concrete relationships between tactics and patterns will save architects (who are not, in general, security experts) the trouble of drawing such links themselves. Such predefined mappings may also prevent architects from making incorrect refinements from tactics to patterns, and from there into code.
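Such a tactic-to-pattern mapping can be as simple as a lookup table that architects consult when refining a design. The sketch below is illustrative only; the tactic names follow the common SEI catalog, and the pattern entries are familiar examples from the security-patterns literature, not the session’s actual mapping.

```python
# Illustrative mapping from abstract security tactics to the concrete
# security patterns that refine them (entries are examples, not exhaustive).
TACTIC_TO_PATTERNS = {
    "authenticate actors": ["Authenticator", "Single Sign-On"],
    "authorize actors":    ["Role-Based Access Control", "Reference Monitor"],
    "limit exposure":      ["Demilitarized Zone"],
    "encrypt data":        ["Secure Channel"],
}

def candidate_patterns(tactic):
    """Suggest concrete patterns that refine a chosen security tactic."""
    return TACTIC_TO_PATTERNS.get(tactic, [])

print(candidate_patterns("authorize actors"))
```

A predefined table like this is what saves a non-specialist architect from having to derive the tactic-to-pattern links, and from deriving them incorrectly.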
This participatory session will begin by introducing and familiarizing participants with the concepts of software security, security tactics, and security patterns. Then we will proceed to a group activity. The purposes of this hands-on exercise include
Conventional evolutionary prototyping for Small Data system development is inadequate and too expensive for identifying, analyzing, and mitigating risks in Big Data system development. This article presents RASP (Risk-based, Architecture-centric Strategic Prototyping)—a model for cost-effective, systematic risk management—and shows how it is deployed in Agile and Big Data system development. The RASP model advocates using prototyping strategically and only in areas that architecture analysis cannot sufficiently address. In RASP, less costly MVP (minimum viable product), throw-away, and vertical evolutionary prototypes are used strategically, instead of blindly building full-scale prototypes. The RASP model is validated in an embedded case study of nine Big Data projects with a global outsourcing firm. A decision flowchart and guidelines distilled from lessons learned for whether, when, and how to do strategic prototyping are provided.
In the domain of Internet of Things connectivity and data processing for medical devices, an enterprise IT project of significant size was experiencing problems with reliability and performance. Three reviews were performed on this project, in various stages, using the Architecture Tradeoff Analysis Method (ATAM):
This experience report contains lessons learned from
Today’s software delivery teams are expected to operate at internet time and scale. These expectations have broadened the adoption of Agile and Continuous Delivery practices. As a result, the pendulum has swung away from traditional software architecture practices and, in particular, enterprise architecture. We do not believe that the pendulum will swing back to these traditional practices. However, practitioners still need an architectural approach that encompasses Continuous Delivery and Agile practices while providing a broader architectural perspective. Continuous Architecture addresses this concern.
Continuous Architecture is based on six principles and a set of tools that support them. It is not a formal process; rather, it is based on practical experience of architecting solutions in large enterprises. As enterprises struggle to implement Agile and Continuous Delivery practices at scale, Continuous Architecture will become more important. Adopting a flexible but structured architecture approach is critical to finding the right balance.
Analysts at Sandia National Laboratories who have been tasked with answering strategic questions posed by executives, program managers, and sponsors typically perform data analysis studies. However, the challenges of data provisioning for these studies—data collection, integration, quality, and other considerations—are complex and time-consuming, limiting the number of studies performed (and therefore questions asked) at any given time, as well as making the studies long, drawn-out affairs. When the analysts provide answers, the data they used is no longer current, and the results represent a look back at a point in time, resulting in significant additional effort to “refresh” the results.
Sandia’s response to this problem was to architect and build a platform that we called “Analytics for Sandia Knowledge.” This platform utilizes a set of tools, techniques, and custom web applications to create a centralized, integrated, and virtual data repository designed for data analysis and visualization. This platform has significantly reduced Sandia’s “time-to-answer” for data studies, allows more data studies to occur, supports security and privacy needs, and allows us to create sustainable and up-to-date visualizations of data study results. The architecture can support both traditional and Big Data approaches and lays a foundation that is scalable, extendable, and adaptable.
This experience report will provide an overview of the architecture goals, quality attributes, final design, and some lessons learned along the way in creating the virtual data repository and data visualization platform.
In the international standards for architecture descriptions in systems and software engineering (ISO/IEC/IEEE 42010), “concern” is a primary concept that often manifests itself in relation to the quality attributes or “ilities” that a system is expected to exhibit—qualities such as reliability, security, and modifiability. One of the main uses of an architecture description is to serve as a basis for analyzing how well the architecture achieves its quality attributes, and that requires architects to be as precise as possible about what they mean in claiming, for example, that an architecture supports “modifiability.” This presentation describes a table, generated by NASA’s Software Architecture Review Board, that lists 14 key quality attributes, identifies important aspects of each quality attribute, and considers each aspect in terms of requirements, rationale, evidence, and tactics to achieve the aspect. This quality attribute table is intended to serve as a guide to software architects, software developers, and software architecture reviewers in the domain of mission-critical, real-time embedded systems, such as space mission flight software.
Netflix is one of the biggest internet businesses in the United States. At peak hours, its downstream bandwidth usage climbs to nearly 37% of internet traffic. Netflix’s success is based on modern, efficient, and robust technologies, frameworks, and architectural concepts. Should we follow its lead and refactor our systems into microservices, split up big databases, and introduce reactive programming, or should we use polyglot approaches?
To answer these questions, we conducted an architecture evaluation based on the well-known Architecture Tradeoff Analysis Method (ATAM). Only this time we applied it in an inverse manner. Starting with the observable architectural approaches, we extracted those requirements and quality attributes that would provide a perfect fit. We reverse-engineered a utility tree and extracted important architecture tradeoffs.
Our findings give a good understanding of the pros and cons of current technological trends and hint on their applicability in different contexts. In addition, they demonstrate the opportunities that evaluation methods like ATAM offer. Applying the ATAM to a real-life system that almost everybody is familiar with leads to comprehensible conclusions and vivid practical insight.
In this talk, we will describe a business need for data collection and streaming that led to the implementation of a data-streaming solution for delivering messages from many clients to various endpoints, including Kafka, HDFS, MongoDB, and Splunk. The talk will walk attendees through the evolution of the solution from local use of ActiveMQ and relational databases to a real-time data-streaming solution: Pivotal’s Spring XD (Extreme Data).
We will cover scalability considerations, observed performance, and security considerations. We will describe why the tool was selected, the various solution patterns implemented and discarded, and the final solution topology. We will also discuss deployment considerations for the public cloud, including the secure transport of data between data centers.
Since the 1980s, the U.S. Air Force has consistently expanded its software divisions more than any other area. Current Air Force policy, through the Air Force Sustainment Center Way doctrine, dictates that the efficiency and productivity of software development and maintenance are the factors of highest importance. Unfortunately, it remains difficult to evaluate with accuracy any metrics for the efficiency and productivity of these efforts. Furthermore, due to the evolving nature of the information environment, both large-scale and small-scale programs struggle with sustainability.
In this presentation, an example program is analyzed for sustainability based on its lifecycle as it progressed through the Air Force’s process of development and maintenance over a full year. The metrics and process follow the Software Engineering Institute’s Capability Maturity Model Integration (CMMI) and Agile software development. Using both approaches together yielded much better results for the performance factors of efficiency and productivity. Using data collected throughout the program’s lifecycle, this presentation shows which metrics yielded the best feedback for optimizing the program’s process. Statistical analysis of these metrics provides a deep dive into how implementing both CMMI and Agile works well for organizations like the Air Force to produce very adaptive, cost-effective software rapidly and with maximum efficiency and productivity.
Good quality requirements help you make the right architectural decisions, but collecting your requirements is not always easy. The Quality Attribute Workshop helps teams effectively gather requirements but can be costly and cumbersome to organize. The mini-QAW is a short (a few hours to a full day) workshop designed for inexperienced facilitators and a great fit for teams practicing Agile methods. Variants of the mini-QAW exist for both face-to-face and remote collaboration. The mini-QAW method has been used successfully by several groups throughout the world and is finding its place as a standard tool among many software architects.
During this session, we will walk participants through a mini-QAW simulation. Participants will learn about and apply some of the core mini-QAW activities, including scenario brainstorming with a “system properties web,” creating stakeholder empathy maps, and visual voting. The mini-QAW combines these activities with a tuned agenda (compared to the traditional QAW) to create a fast, effective, and fun workshop that many teams can easily adopt and succeed with. By the end of the session, participants will have gained first-hand experience facilitating and participating in the workshop that will let them use the method with their teams back home.
Gamification is a management technique that is growing in adoption and achieving great success. It brings motivation, engagement, and really quick results. Come with us, learn something about the concepts behind the methodology, and see how a software development team discovered this practice and used it in a great variety of problems and scenarios. And almost succumbed to the dark side ...
Through games, we can achieve unbelievable results, but sometimes individuals also turn to the dark side. After all, we are engaging people’s passions. This session covers the following topics:
Also in this session, we invite you to experience gamification in practice and play a funny and engaging game. Learn and feel the power of the Force.
Are you prepared?
Private clouds have gotten much recent attention, with enterprises becoming adept at hosting single applications for a seamless experience. But what about the holistic enterprise? A recent Gartner report found that by 2016, cloud computing will become the bulk of new IT spending; by the end of 2017, nearly half of large enterprises will have hybrid cloud deployments. The journey starts with asking “Why” and defining the vision instead of the mechanics. By identifying the imperatives and underlying drivers, architects can determine how to measure the value of cloud integration. While there is no single formula for cloud adoption, creating a successful hybrid cloud involves strategic and tactical considerations that reach beyond IT to meet the overall business vision.
In this session, hear lessons learned from an international consultancy with experience across multiple business sectors. Leading practices include the following:
With Event Sourcing, there is an alternative way to build web applications that is simple, scalable, extensible, and elegant without an object-relational mapper (ORM) anywhere in sight. As a bonus, we even get a free time machine.
Event Sourcing is a very old way of thinking about modeling—other industries have used it for centuries. Rather than modeling the current state of an object, we model the events that have changed it and derive the current state from the events. A canonical example is that the current state of your bank account balance is simply a calculation of all the debits and credits that have happened to the account.
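The bank-account example can be sketched in a few lines (an illustrative sketch, not code from the talk): rather than storing the balance, we record debit and credit events and derive the balance from the log. Replaying only a prefix of the log gives the state at any past point, which is the “free time machine.”

```python
events = []  # the append-only event log is the single source of truth

def deposit(amount):
    events.append(("deposited", amount))

def withdraw(amount):
    events.append(("withdrew", amount))

def balance():
    """Derive the current balance by folding over all recorded events."""
    return sum(a if kind == "deposited" else -a for kind, a in events)

def balance_at(n):
    """Replay only the first n events: the account state at any past point."""
    return sum(a if kind == "deposited" else -a for kind, a in events[:n])

deposit(100)
withdraw(30)
deposit(5)
print(balance())      # current state derived from the events: 75
print(balance_at(2))  # state after the first two events: 70
```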
This talk will explain what Event Sourcing is, how it differs from ORMs, and why you should consider using it. You will hear about practical examples that highlight differences and how you can incrementally move in this new direction.
Software architecture has enormous influence on the behavior of a system. For many categories of systems, early architectural decisions can have a greater influence on success than nearly any other factor. After more than 20 years of research and practice, the foundations for software architecture have been established and codified, but challenges remain. Among other trends, increased connectivity, a shift to the cloud and to mobile platforms, and increased operational and market tempos have precipitated the need for changes in architectural practices and decisions. This talk shares a perspective on the history of software architecture, trends influencing the need for change and the related architectural challenges, and some applicable research and practices.
In 1996, Mary Shaw and David Garlan effectively established software architecture as a systematic discipline to reason about software at a higher level of abstraction. Less than a decade later, the architecture discipline came under severe attack by the Agile movement. In the first decade of the millennium, many perceived up-front design as wasteful and obstructing agility. Before answering questions about what makes a good architecture, architects were forced to consider why architecture was needed in the first place. The answer emerged (like many good architectures) in the years that followed. It became clear that under some conditions, developing software without proper architecture can be very risky and costly. Nowadays, the relationships among risk, cost, and architecture are firmly established. Architecture is needed to manage risk and cost associated with complex systems, answering the why question and giving a basis for new answers to the how and what of architecture practice. At CGI, we collected these answers in our Risk- and Cost-Driven Architecture (RCDA) approach. This talk is about how we did this, the results we observed, and the challenges ahead.