2015 Institutional Innovation Report
Learning and Performance: Learning by Doing
A starting premise of the Roundtable on Institutional Innovation is that the most critical factor in determining the success of organizations is no longer achieving economies of scale, which was the chief goal in the 20th century, but providing scalable learning. But what kind of learning is needed, and how can it be scaled?
According to Maryam Alavi, Dean of the Scheller College of Business at Georgia Tech, the way companies typically “learn” is by codifying procedures in order to minimize uncertainty and avoid mistakes. Most corporate training still takes place through formal instruction, either in a classroom or, more recently, delivered online to workers’ desktops, a process designed to convey a specific body of knowledge or skill set to employees.
But the kind of learning that is key to operating exponentially should focus on improving workers’ ability to solve problems and make good decisions, rather than simply acquiring lower-level skills or knowledge. It typically involves acquiring tacit as well as explicit knowledge. How can this sort of learning best be delivered at scale? The short answer, according to Alavi, is by deliberately designing jobs to enable workers to learn by doing and by deploying technologies that can accelerate learning.
Supporting Learning through Work Design and IT Tools
First it is useful to understand how learning happens. The research literature on learning identifies two different types of learning: cognitive and social. The first focuses on what goes on in students’ heads when they learn; the second is concerned with the environment that best supports learning.
The goal of cognitive learning theory is to elucidate the mental processes involved in learning: it focuses on the relationships among factors such as perception, attention, information processing, reflection, drawing inferences, memory and recall. According to this theory, learning is a constructive, goal-oriented process that works best when learners are solving real-world problems that provide engagement and a sense of purpose. By emphasizing intention and motivation, this concept of learning is very different from the “empty vessel” approach, in which students passively absorb knowledge that is delivered to them. It acknowledges that learning truly new skills is hard work, and that higher-level learning in particular requires a great deal of effort and can be frustrating. Without sufficient attention to students’ motivations, this kind of learning can be difficult to achieve.
Social learning is based on the premise that the most effective learning is not a solitary activity but is best accomplished through cooperative activities with others that involve asking and answering questions, sharing information and participating actively in learning communities.
Both approaches to learning are important, and they are not mutually exclusive: there must be cognitive changes taking place in a student’s mind in order for learning to take place, and social interactions promote engagement that supports cognitive learning.
The most effective learning is connected directly to a worker’s job, and the best way to support and accelerate both cognitive and social learning is through the way in which work is designed. Research has shown that specific work attributes—perceived significance of one’s job, the variety and challenge of work assignments, autonomy to find the best solution to problems, timely feedback from managers and peers, and participating in small teams—can lead to psychological and social states that promote learning and performance and can contribute to positive affect in organizations.
In addition, there are a number of emerging information technologies that can be deployed to further enhance learning: Automation can take over the mundane, repetitive and boring aspects of work, freeing workers to concentrate on more challenging tasks. Simulation models and machine learning can support cognitive learning by enabling workers to explore and understand relationships among multiple interacting variables. In addition, data analytics provides tools to draw inferences from and recognize patterns in big data sets. And social media can facilitate team building and team interactions that include information sharing, giving and receiving feedback, asking and answering questions. In what is truly a win-win situation, the same work design principles and IT tools that are key to improving work performance are also important ingredients in promoting continuous learning. In fact, the two objectives go hand-in-hand.
Alavi added that the most effective learning is “scaffolded”—that is, it happens under the supervision of a mentor who is able to give learners timely feedback. This kind of structure is vital to good learning; efforts by teams to learn that are undirected and unsupported are generally less successful.
Finally, Alavi pointed out that the process of “unlearning,” which is an important ingredient in keeping up with a changing environment, does not involve “erasing” old information from the brain, but rather recognizing that an existing mental model is no longer accurate or adequate and building a new mental model (See Sidebar, Emergence of a 21st Century Infrastructure). This is a distinctly different (and more challenging) process than incremental learning, where new knowledge is used to elaborate an existing mental model.
Emergence of a 21st Century Infrastructure
As digital technology evolves, it not only gets cheaper, faster and more powerful, but it enables entirely new ways of operating. A series of such paradigm-shifting breakthroughs, many that build on previous technologies, is creating a new infrastructure for the 21st century that is fundamentally different from the infrastructure that prevailed in the last century.
John Seely Brown has illustrated the need for continuous unlearning, as well as new learning, through his efforts to keep up with the evolution of technology. Over a period of a dozen years in at least six different domains, he has had to discard old assumptions and understandings about how technology works and acquire entirely new paradigms. (Each of these tech areas is fairly technical and relatively complex; each is described here with just enough detail to explain the nature of the shift. More information on each is listed in footnotes.)
From Two-phase Commit to Eventually Consistent. When computers began to support multiple applications and multiple remote users, a need arose for a way to synchronize transactions to ensure consistency of data. The two-phase commit protocol accomplished this by providing an orderly process for gathering the participants in a transaction (phase one) and then deciding to commit or abort that transaction (phase two). With the growth of distributed databases that serve thousands or millions of customers simultaneously, a different approach to ensuring consistency was needed. A new approach, known as “eventual consistency,” was popularized by Amazon.com to postpone the full consistency check until the end of a complex set of transactions, such as checking out with a shopping basket.
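The flavor of eventual consistency can be conveyed with a toy sketch. The code below is a minimal last-writer-wins model, not how Amazon actually implements it (real systems such as Amazon's Dynamo use vector clocks and quorums); all class and function names here are our own illustrative inventions. Two replicas accept writes independently, and a periodic "anti-entropy" pass brings them back into agreement.

```python
class Replica:
    """Toy replica in an eventually consistent, last-writer-wins store.
    Illustrative sketch only; real systems use vector clocks and quorums."""

    def __init__(self):
        self.data = {}  # key -> (timestamp, value)

    def write(self, key, value, ts):
        # Each write carries a timestamp; no cross-replica coordination.
        self.data[key] = (ts, value)

    def read(self, key):
        return self.data.get(key, (None, None))[1]


def anti_entropy(replicas):
    """Merge all replicas: for each key, the newest write wins everywhere."""
    merged = {}
    for r in replicas:
        for key, (ts, val) in r.data.items():
            if key not in merged or ts > merged[key][0]:
                merged[key] = (ts, val)
    for r in replicas:
        r.data = dict(merged)


# Two replicas diverge (e.g., the same shopping basket updated via
# different servers), then converge once anti-entropy runs.
a, b = Replica(), Replica()
a.write("cart", ["book"], ts=1)
b.write("cart", ["book", "pen"], ts=2)  # the later write wins
anti_entropy([a, b])
```

After the merge, both replicas report the same basket; in between, readers may briefly see stale data, which is the trade-off eventual consistency accepts in exchange for availability and scale.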
From Client-server to Cloud. Although cloud computing might seem to be just a new version of the old client-server paradigm, it in fact raises entirely new challenges that call for new solutions. For example, as Netflix understood ahead of many others, in cloud computing one must design for failure in the spirit of graceful degradation, which requires a great deal of thought about what to do when (not if) a component fails. General principles include:
- Fast Fail: Set aggressive timeouts so that failing components do not grind the entire system to a halt.
- Fallbacks: Design each feature to degrade or fall back to a lower-quality representation. For example, if we cannot generate a personalized list of movies for a user, fall back to cached (stale) or un-personalized results.
- Feature Removal: If a feature is non-critical and slow, remove it from any given page to prevent it from impacting the user experience.
Also consider designing with N+1 redundancy in mind. In other words, allocate more capacity than you actually need at any point in time to provide the ability to cope with large spikes in load caused by member activity or the ripple effects of transient failures, as well as the failure of up to one complete zone.
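The fast-fail and fallback principles above can be sketched in a few lines. This is a generic illustration, not Netflix's actual code; the function and service names (`with_fallback`, `personalized_movies`, `cached_movies`) are hypothetical stand-ins.

```python
import concurrent.futures


def with_fallback(primary, fallback, timeout=0.2):
    """Fast fail + fallback: run `primary` under an aggressive timeout;
    on timeout or error, serve the degraded `fallback` result instead.
    Sketch only; production systems add retries, metrics, circuit breakers."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(primary)
        try:
            return future.result(timeout=timeout)
        except Exception:  # covers both TimeoutError and component failure
            future.cancel()
            return fallback()


def personalized_movies():
    # Simulate the recommendation component failing outright.
    raise RuntimeError("recommendation service is down")


def cached_movies():
    # Stale / un-personalized results: a lower-quality but working answer.
    return ["popular-1", "popular-2"]


result = with_fallback(personalized_movies, cached_movies)
```

The point is architectural rather than syntactic: every feature call site knows in advance what its degraded answer looks like, so a component failure costs quality, not availability.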
From CPU to GPU + CPU. The “brains” of every computer is its central processing unit (CPU), which executes the instructions of a computer program by performing a series of arithmetical and logical calculations. The earliest stored-program computers relied on CPUs, and succeeding generations of personal computers were based on microprocessor-based CPUs. Initially, displays were just text on monochrome screens that could be supported by CPUs, but beginning in the 1970s and 1980s, arcade games, and then personal computers, began to integrate specialized graphics processing units (GPUs) to provide increasingly sophisticated visual capabilities for such things as video games, graphical user interfaces, and photo and video editing. GPUs are highly parallel processors that run moderately simple instructions. The key to maximizing their performance is to lay out information in the internal memory of the chip so that instructions can be effectively streamed to each processor with almost no latency. Unlike the recent past (though not the distant past—see below), understanding the geometric layout of information in memory on a GPU is important. (On the truly ancient IBM 650, getting optimal performance entailed considering where information was stored on a rotating drum. So, in a way, GPUs represent a return to those early days of computing, but in a vastly more complicated form.)
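The memory-layout point can be made concrete with a small sketch of row-major indexing, the convention used by most GPU programming models. This is an illustrative example in plain Python, not GPU code; the payoff on real hardware is that threads reading consecutive addresses get their loads "coalesced" into one memory transaction.

```python
def index(r, c, cols):
    """Row-major address of element (r, c) in a flat 1-D array:
    row r occupies the contiguous span [r * cols, (r + 1) * cols)."""
    return r * cols + c


ROWS, COLS = 2, 3
flat = [0.0] * (ROWS * COLS)  # one contiguous buffer, like GPU device memory
for r in range(ROWS):
    for c in range(COLS):
        flat[index(r, c, COLS)] = 10.0 * r + c

# Row-major traversal touches addresses 0,1,2,3,4,5 in order: sequential,
# so adjacent GPU threads would read adjacent memory (coalesced access).
row_major = [index(r, c, COLS) for r in range(ROWS) for c in range(COLS)]

# Column-major traversal of the same buffer strides by COLS each step:
# the same data, but an access pattern that is much slower on a GPU.
col_major = [index(r, c, COLS) for c in range(COLS) for r in range(ROWS)]
```

Same buffer, same arithmetic; only the traversal order differs, and on a GPU that ordering difference is often the difference between fast and slow.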
From SQL to NoSQL. Structured Query Language (SQL) is a programming language designed to manage data in a relational database, in which information is stored in rows and columns. As relational databases became the dominant standard, SQL became the most widely used database language. SQL databases require a pre-defined schema that structures the data so that a wide variety of queries can be handled efficiently. NoSQL (a name first coined in 1998) refers to a radically different approach in which both structured and unstructured big data can be accessed without first being converted to a schema. Many of the techniques for dealing with SQL regimes do not map over very well to the more free-form NoSQL regimes, which, for example, allow rapid exploration of data to find the most appropriate model for eventually schematizing it.
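The schema-first versus schema-later contrast can be shown side by side. Below, the SQL half uses Python's built-in `sqlite3` module; the "NoSQL" half is deliberately simplified to a list of dictionaries, standing in for a document store such as MongoDB (the table and field names are illustrative).

```python
import sqlite3

# SQL: a schema must be declared before any data can be stored or queried,
# and every row must fit that schema.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
db.execute("INSERT INTO users (name, city) VALUES (?, ?)", ("Ada", "London"))
sql_rows = db.execute(
    "SELECT name FROM users WHERE city = ?", ("London",)
).fetchall()

# NoSQL-style document store (toy stand-in): records of different shapes
# coexist, and no schema is declared up front.
documents = [
    {"name": "Ada", "city": "London"},
    {"name": "Grace", "rank": "Rear Admiral"},  # different shape, still storable
]
doc_hits = [d["name"] for d in documents if d.get("city") == "London"]
```

Both halves answer the same query, but only the first required deciding the shape of the data in advance; the second lets the shape emerge and be schematized later, which is exactly the exploratory freedom the NoSQL shift described above provides.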
From Desktop to Mobile First. Early computers filled entire rooms. As they shrank in size, they came to reside on individual desktops. Then, as computing devices got even smaller and wireless broadband networks grew, computers were no longer tethered to the desktop. Now smartphones and tablets are providing the main connection to the Internet, first for millions, then for billions of people who may never own a full-fledged computer. Organizations that grew up providing content for PCs have found themselves needing to shift their focus and adjust their mindset to a world dominated by mobile apps. For example, it is now possible to tap the power of geo-fencing to determine what information should be streamed to a device or even allowed to be displayed on it depending on its location. Exploiting such opportunities requires new ways of thinking, but these may be incompatible with organizations’ established practices or their legacy ERP systems.
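The geo-fencing idea mentioned above reduces, in its simplest form, to a distance test against a fence's center point. The sketch below uses the standard haversine great-circle formula; the function names and the Times Square example coordinates are our own illustrative choices, not any particular vendor's API.

```python
import math


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    R = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))


def in_geofence(device, fence_center, radius_km):
    """True if the device lies inside a circular geo-fence."""
    return haversine_km(*device, *fence_center) <= radius_km


# A hypothetical store near Times Square with a 1 km fence.
fence = (40.7580, -73.9855)
inside = in_geofence((40.7589, -73.9851), fence, 1.0)    # a block away
outside = in_geofence((40.6892, -74.0445), fence, 1.0)   # Statue of Liberty
```

A mobile app would run such a check (or receive it as a platform event) to decide what content to stream to, or suppress on, the device at its current location.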
From Defense to Offense in Cybersecurity. When the Internet was first launched, it linked a tiny number of researchers and computer scientists. Because use was limited to a small community, virtually no attention was given to the issue of security. As the Internet grew to connect the entire world, the lack of security provisions in its fundamental design has become a bigger and bigger problem. Today, virtually any entity connected to the Internet can expect to be attacked by malicious unknown parties from anyplace in the world. Cybersecurity experts have concluded that taking purely defensive measures and waiting to be attacked is no longer sufficient; government agencies and large companies are actively exploring options for taking the offensive in responding proactively to threats, or even constructing honeypots that attract cyber attackers in a way that enables the defender to determine a signature of the attacker.
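A honeypot's core mechanic is simple: present a decoy service and record who connects and what they probe for. The toy listener below, written with Python's standard `socket` module, captures one connection's source address and first bytes as a crude "signature"; real honeypots emulate whole services and feed richer telemetry into threat intelligence. All names here are illustrative.

```python
import socket
import threading


def run_honeypot(host="127.0.0.1", max_events=1):
    """Minimal decoy listener: record the peer address and the raw bytes
    each connection sends. Illustrative sketch only, not a real honeypot."""
    events = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))           # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def read_all(conn):
        chunks = []
        while True:
            data = conn.recv(1024)
            if not data:          # peer closed the connection
                break
            chunks.append(data)
        return b"".join(chunks)

    def serve():
        for _ in range(max_events):
            conn, addr = srv.accept()
            events.append({"peer": addr[0], "probe": read_all(conn)})
            conn.close()
        srv.close()

    worker = threading.Thread(target=serve, daemon=True)
    worker.start()
    return port, events, worker


# Demo: play the attacker probing the decoy for an admin page.
port, events, worker = run_honeypot()
attacker = socket.create_connection(("127.0.0.1", port))
attacker.sendall(b"GET /admin HTTP/1.0\r\n")
attacker.close()
worker.join(timeout=5)
```

Every byte an attacker sends to a decoy is free intelligence: there is no legitimate reason to touch the service, so the recorded probes characterize the attacker rather than normal traffic.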
One characteristic shared by several of these innovations is that they are not the result of academic research but rather came about as a response, often by a commercial venture, to a real-world problem or constraint. It is also true that the old paradigms do not completely vanish but continue to be relevant in certain contexts.
Reports from the Field
How can big established organizations that typically have large HR departments that provide conventional corporate training move to new forms of embedded exponential learning? Roundtable participants described some of their efforts:
When he was at Marriott Corporation, Tony Scott, now the United States Chief Information Officer, ran a portion of the company’s Great America theme parks. He experimented with getting the park managers to play a lemonade stand game that ran on Apple II computers, but he found that while the game was useful in teaching certain basic skills, it was not particularly effective in teaching business strategy or how to run a team.
At Target, Casey Carl implemented an “Action Learning for Leadership” program that brought a group of five or six young staff members together for a week of intensive activity in which they had to work together to solve an actual business problem, one linked directly to the firm’s overall corporate strategy, under an executive who was actually accountable for solving it. One key ground rule that shaped the participants’ experience was that on the first day of the program they were only allowed to ask questions, not to propose responses. The goal of the rule was to teach the participants to “fall in love with the problem” rather than with a particular solution to it.
Tom Rosenstiel also emphasized the importance of asking questions. Getting to exponential learning requires asking the right questions, but most companies ask the wrong ones, which lead to incremental rather than far-reaching innovations. What organizations need to ask are questions such as: “What function do we perform in people’s lives? What problems do we solve? How could what we do be done better if we did not exist?”
John Hagel and John Seely Brown of the Deloitte Center for the Edge both pointed to the worlds of online gaming and extreme sports as places where extreme learning takes place. The best learning is an adventure: as young gamers put it, “If I ain’t learning, it ain’t fun.”
Andy Billings of Electronic Arts agreed, noting that almost all learning is based on games. When involved with a game, the best players make note of the consequences of their decisions. In game-based instruction, a mentor is able to pause a game and ask the players to consider how they can apply what they are learning to their real-world assignments.
A key to accelerating learning is to organize learners into communities of practice—groups that are typically no larger than 10 to 15 people who share deep trust with each other and are connected to a broader learning platform. These platforms are making it possible to scale learning beyond an individual enterprise to a larger business ecosystem by building “collaborative creation spaces” that can include millions of individual learners. Another powerful learning strategy is to learn from adjacencies. In their book, The Power of Pull, Hagel and Brown describe how a group of youngsters in Hawaii who were determined to become world-class surfers learned important lessons from skateboarders and motocross participants.1
1 John Hagel III and John Seely Brown, The Power of Pull, Basic Books, 2012.