Peer Learning

“Just make it more interactive!”

Bart Doorneweert

The  flight from Amsterdam to Jakarta is 16 hours, but it feels even longer  when you know you have to fly right back. I was on the hook to present a  research project on growing bell peppers to the Minister of  Agriculture.

I was expected to renew its funding.

But we knew that the 600,000 EUR pilot project only managed to get 2 farmers involved.

And we knew that the project had sold exactly zero bell peppers.

The  engineers said they’d done their job, and it was up to the economists  to find the marketing plan, except the lead economist had just  disappeared.

So a day before, they’d dumped this hand-off on me, a junior researcher.

The technology worked. Automated greenhouses grow things faster, better, without chemicals. But Dutch agricultural technology always works in Holland, not necessarily abroad. So, making a greenhouse business work in other places requires more than just engineering. Which is why the government invests a lot in exporting Dutch agricultural technology with projects like this.

Indonesia is roughly the size of Greenland, but with a population of 250 million, and half its economy is agriculture.

The  pilot had some wins to show on the engineering end; it produced bell  peppers. On top of that, it made a great case for investment, at least  on paper. Greenhouses are a big up-front cost, but because bell peppers  are a delicate crop with a limited shelf life, they promise high  margins.

But we knew that Indonesians don’t eat bell peppers.

And we knew that Indonesia didn’t have the transport infrastructure to cost-effectively distribute any kind of pepper.

Sambal hot sauce is the pride of Indonesian daily cooking. It’s made with chili peppers, which Indonesia’s climate is perfect for growing. But because production is scattered over Indonesia’s 17,508 islands, it’s cheaper to import chili peppers to make sambal.

There  are always formal protocols for handing over projects between  governments. This one would be handed over at a closing ceremony at the  Ministry in Jakarta.

A U-shaped table in a formal meeting room would be set up, with seats assigned with name tags, and a jar of water and breath mints to be shared by each pair of attendees.

The Minister would kick off the meeting with a welcome, followed by the representing foreign diplomat, in this case the agricultural counselor, with some words on the strength of collaboration between countries.

The project leader would present some of the activities and results. In his wake, the 2 farmers who participated in the pilot project would say a word of gratitude about the grant they received.

Then  the senior government representatives would ask questions, and discuss  the project. They’d reserve their decision on whether or not to renew it  for a written response later.

After the formal discussion ceremony, the Minister would leave for his next obligation.

For  the rest of the participants there would be an informal walk-about exhibit, showing the tangible technology accomplishments of the project.

The  ceremony is organised like this to empower the Minister to make the best decision possible. In principle, the discussions question the merit  of the project itself, and the Minister should have clear knowledge of  the project’s effectiveness before deciding.

But there are other factors for the Minister to consider too — government budgets, policies, diplomatic relationships, etc. — so this process also allows the Minister to make this call discreetly, later, outside of the room in which the ceremony dynamics take place.

Every stakeholder responsible for the project is recorded as present in the meeting minutes, but only the most senior representatives speak. This respects seniority, but usually they haven’t been closely involved in the project itself. They’re also all incentivized to continue the funding. For the university, it’s more money to keep staff. For the Indonesian government, it’s maintaining foreign relations and continuing to receive the benefit of foreign investment. So the discussions and questions stay at a high level, to remove the potential sting of contradictory details or stepping on the wrong toes.

This also creates insurance: should the Minister’s decision backfire, the responsibility lies with the stakeholders who were present.

So everyone knew that no hard questions would be asked.

Which is why in spite of the fact that the pilot didn’t work, the university expected the project funding to be extended.

I  had been briefed on all of this, and told exactly how much was riding  on successfully extending the funding. This could mean another 600,000  EUR to keep 5 of my colleagues working for 4 more years. Or not.

The  first clue that the Minister wouldn’t show up came on the agenda that  was circulated the day before. There was an asterisk beside his name.  But he appointed someone from the ministry in his stead, and the  ceremony proceeded.

No difficulties were mentioned during the presentations. Throughout the polite exchanges, and even in the informal discussions afterwards, raising them would have broken both protocol and the university’s instructions.

After the ceremony, we all shared goat satay and a few bottles of Bintang beer, and I was on the flight home.

The  Minister decided to extend the project for another 4 years, with the  explicit request to scale up adoption of the greenhouse technology for  bell peppers.

Eight years later, the project continues, but still no reports on bell pepper sales.


Putting a discussion on the agenda doesn’t mean a meaningful exchange of knowledge happened.

The example above seems like an extreme case, but it’s actually common. It’s not just formal procedures — the same oversights happen at conferences, universities, and startup accelerators. Even the most “interactive” workshops often fall into the motions of energetic discussions and activities, and still fail to make a deeper educational impact.

In order to make well-founded choices for education programs, and avoid the common fallacies, we need to:

1. Take a hard look at what we’re doing now, and spot weaknesses or disconnects from the learner’s perspective.

2. Make it clear which areas to improve next.

To support this type of evaluation, we’ve created the Peer Learning Pyramid. It’s an evaluation model for any peer learning environment that reveals its strengths and inconsistencies, as well as appropriate next steps.

It’s based on the 3 key qualities we describe in “What is Peer Learning?”:

Responsiveness — the program’s ability to systematically assess and provide knowledge as and when it is needed. At first, that means getting the learners’ goals and questions  up-front, and at a more advanced level, it requires diagnosing each  learner before they begin.

Programs start to become more responsive when the education content is calibrated to learners’ needs. Ultimately, responsiveness takes the form of diagnosis through mentoring conversations with the learner.

Agency — flexibility  given to the learners in deciding their learning outcomes. This starts  with inspiring and empowering the learner to self-direct, and then  builds on that self-direction by switching to a supporting rather than  directing role.

Agency  evolves from a push by the educator to shape the learner’s mindset, to  pull-based learning, where the learner takes full control.

Hyper-connectivity — being  able to make meaningful connections that jump outside of the learner’s  network, to the most relevant sources. This starts with brokering select  connections on the learner’s behalf, and evolves into creating  environments where they can make those connections instantly by  themselves.

Opening up connections to expertise for the learner takes active introductions by the program, but when learners are fully self-directed, connections come through facilitated encounters.

The Peer Learning Pyramid (PLP) model

Hyper-connectivity,  or systematically connecting learners to others on their path, only  makes sense when they have a path. That means they have Agency within  their learning environment, but also that the program itself is designed  to listen and respond. So Hyper-connectivity depends on a foundation of  Agency and Responsiveness.

So the PLP is a stacking pyramid. It shows you how to build the foundation first, and then build upwards. In many cases, we see the promise of an educational event fail because it makes connections but assumes it knows the learners’ goals, or fails to let the learner act effectively.

Assessing an education program on each of these factors helps us see if we’re going through the motions without effect. It helps to determine if relevant learning actually takes place, and if not, how to fix it.
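
To make the pyramid concrete, here’s a minimal sketch (in Python) of the PLP as a scoring rubric. The level names are the ones used in this chapter; the class, its field names, and the min()-based “stacking” check are our own illustrative assumptions, not part of the model itself.

```python
# A minimal sketch of the Peer Learning Pyramid (PLP) as a scoring rubric.
# The level names come from the chapter; the class and the stacking check
# below are our own illustrative reading of the model.

from dataclasses import dataclass

RESPONSIVENESS = {0: "Ignorant", 1: "Calibrated", 2: "Diagnostic"}
AGENCY = {0: "No Agency", 1: "Enabling", 2: "Active"}
HYPER_CONNECTIVITY = {0: "Unconnected", 1: "Selective", 2: "Instant"}


@dataclass
class PLPScore:
    name: str
    responsiveness: int  # 0-2
    agency: int          # 0-2
    connectivity: int    # 0-2

    def describe(self) -> str:
        # Render the three ratings with their level names.
        return (
            f"{self.name}: "
            f"Responsiveness {self.responsiveness} ({RESPONSIVENESS[self.responsiveness]}), "
            f"Agency {self.agency} ({AGENCY[self.agency]}), "
            f"Hyper-Connectivity {self.connectivity} ({HYPER_CONNECTIVITY[self.connectivity]})"
        )

    def connectivity_outruns_foundation(self) -> bool:
        # One reading of the stacking rule: connections only pay off when
        # both foundation layers are at least as developed as the top.
        return self.connectivity > min(self.responsiveness, self.agency)
```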

To demonstrate how this model works, let’s use it to evaluate a few different educational formats:

Lecture programs

Lecture-driven programs excel at topics where the learner needs little input into their learning journey. They typically don’t sense learners’ needs or adapt to them, don’t connect learners to experts other than the lecturers, and give little or no agency to the learner regarding the topics.

For topics and disciplines that don’t change much, this inflexibility allows the lecturers to improve their delivery through practice, and polish their content over time. But in faster-changing situations, choosing lectures as the default education format is a liability. Lectures become ineffective when they’re not relevant to the learners’ current challenges. Learners tune out. This is the cause behind lecturers’ concerns about creating engagement in their classes.

Lectures are typically one-way education, but sometimes lecturers start by asking the students questions, either about what they want to learn today, or by assessing them with a show of hands. This is a form of Calibration, so on the PLP, we can rate Responsiveness at Level 1 (Calibrated) in these cases.

However,  lectures don’t enable learners to decide how they would like to achieve  their learning goals, so we rate them at Level 0 (No Agency) for  Agency. They also don’t involve other experts to address these needs, so  we evaluate them as Unconnected — Level 0 on the Hyper-Connectivity  scale.

We can see why lectures leave much to be desired from the learners’ perspective, and also how to improve them. Quick wins lie in becoming more responsive: what about doing a post-up of the questions tied to today’s lecture, to get a sense of who’s struggling with what type of issues? Or, if the class is too big for a post-up, how about sending out a quick online survey to participants before the lecture? If a few students have relevant projects, why not bring them to the podium and work with them or coach them, for all to benefit?

All these types of actions help the lecturer collect an impression of what’s on the minds of learners as a form of preparation, and put emphasis on those particular topics.

Panel discussions

The  idea behind the panel discussion is to get a good conversation going  between experts, to flush out the most relevant topics of the day. The  reality is that most panels become a set of small talks, with each  panellist pushing a pre-determined agenda, or responding to the  moderator’s set of irrelevant questions, which are actually just meant  as fall-backs. Panel discussions often result in some guy rambling on  about something irrelevant to the audience, and are consequently the  subject of scorching commentary in the back channels where the event is  discussed.

Many  panel moderators don’t like to involve the audience, and they have good  reason. Audience questions are often irrelevant, or they “like to state  a point related to the topic, rather than asking a question”. Audience  involvement is seen mostly as a distraction, and because of it, panel  discussions restrict learner participation.

Let’s look at panels from the learners’ perspective. Panel discussions do tend to attract learners with clear learning interests. They recognise the expertise of the panellists, usually because they are aware of the state of the art, and sometimes because they are working on their own projects. But, contrary to this high level of learner agency, the panel format leaves learners powerless to express themselves, or direct the discussion to something meaningful to them.

On top of that, invited panellists are usually outsiders, either from a different place or further along than their audience, which makes the divide between panel and audience even bigger.

This is frustrating for learners. So when it’s question time, both the frustration and the built-up pressure to try to learn something relevant turn Q&As into the crazy spouting pressure-release-valve phenomenon that panels are notorious for. The problem isn’t the audience; it’s that they haven’t been given a voice except for a short question right at the end.

The situation is frustrating for panellists too. They’ve been put in a format that creates an antagonistic exchange, so they naturally choose to hide out in the speaker lounge after the session.

How  do panels score on the PLP? Well, they fail to make a mark on  Hyper-Connectivity, because they don’t facilitate the creation of  relationships with the learners (Level 0, sorry).

Do the learners have the opportunity to specify the topics of discussion, other than asking a question? Does the learner have any say in how to interact with the expert? No to both, hence zeros all around for Responsiveness (Ignorant) and Agency (No Agency).

Targeted calibration on the questions learners have would create more engagement. Could you find specific challenges amongst them to be discussed by the panel? Did the panel start with any attempt to assess the audience’s learning goals, or even understand why they are in that particular room?

So, what if instead of putting the panellists on centre stage, we consider a challenge that a learner might have? When the learners’ goals are visible to all participants (both audience and panellists alike), it’s likely that the panellists won’t be the only relevant experts who have something to contribute.

The audience includes people with local experience who are more empathetic to the goals and assumptions of learners and their priorities. Putting the learner in the centre now creates unity amongst the interests of all participants, and provides a more constructive learning context overall.

Panels  will benefit a lot if their Responsiveness is increased by activities  that identify people who have relevant issues for the whole group of  participants to discuss.

Large conferences

If  you’ve ever had the opportunity to experience a conference of thousands  or tens of thousands of people, you’ve probably left with a sense of  awe or excitement. Large conferences are a unique type of learning  experience because of who you can meet. They attract a broad, diverse  group of people with some common interest. The potential is in the  people you meet, who you’d probably never discover otherwise.

In  order to deliver on the promise of bringing people together, conference  organisers need to attract a crowd. The way they draw in the crowds is  by committing big name speakers to the event.

Yet, despite the crowd being present, the success of a conference is still fragile. If the talks are bad, the participant feedback for the conference as a whole will reflect that. On top of this, a substantial share of attendees leave the conference without really having met anybody new at all.

Conferences hold a promise of hyper-connection, but that promise can only be delivered if the conference becomes more responsive. The freedom to shape the agenda is an issue. How can you give participants more control to shape their own program at the conference, and actually help them succeed in meeting relevant new people? The PLP reveals where this risk to a conference’s success comes from.

Because so many people with common interests and challenges are drawn together, the Hyper-Connectivity potential is Instant, with a maximum rating of 2 — there could be almost no delay for learners to find people highly relevant to their specific learning goals. But, because conferences are designed around the speakers, this is where misalignment with the learners’ needs happens.

Even at conferences that promote themselves on who you meet, the physical space induces people to find a seat in a dark lecture hall, listening to a speaker, not engaging with like-minded people. The content for the talks themselves is also set by the individual speaker, not the learners. At their best, the talks are informative. As a result, people’s time allocation for the conference is imposed on them. This centralised control over the setup of the program makes Large Conferences Ignorant (Level 0) on Responsiveness.

The only agency Large Conferences allow the learner is selecting which talks to attend (or not), rating Level 1 (Enabling). They don’t allow the learner to specify or direct topics, but they still push the learner to curate their own learning in some way. No wonder everyone always looks forward to the coffee breaks in the program!

Thus,  although conferences promise Hyper-Connectivity, they offer little to  participants to support it. And, the paradox of the conference is that  all the talks actually prevent people from meeting.

So  while conferences have the potential for connecting everyone to deeply  relevant people, what we call instant hyper-connectivity, this  systematically fails because the underlying levels of responsiveness and  agency don’t support it.

Unless  conference organisers start relinquishing control to the attendees,  providing ways for people to share their agendas and the time and space  to start relationships, they will not realise the promise of  Hyper-Connectivity.

Learners need to have a say in how they interact with the vast expertise around them. They also need guidance to connect with the right people at a conference. So, rather than attempting to feed them speaker content, efforts should focus on facilitating connections.

A big step towards this is offering break-outs into smaller groups that participants can choose from. If the issues addressed in those groups are diagnosed, and shown to be representative of the current concerns of the community at large, the learner will be able to find their way to the relevant wisdom they need. Only by increasing Responsiveness, and building on the Agency in the room, will Hyper-Connectivity be achieved.

Retreats

Over the last decade, the conference industry has grown and now largely competes on size rather than content or quality. A response to that is the emergence of smaller retreats, where a group of fewer than 20 people is chosen to meet in a comfortable, secluded environment for a few days.

Retreats are usually quite unstructured, allowing the participants to bring their own agendas and start meaningful conversations. They are good examples of creating environments that build on Agency. If you’re invited to a retreat, you’re expected to be self-directing and proactively make use of the opportunity. That’s why they rate Level 2 (Active Agency) on Agency.

Being loose on structure has its benefits, but it also comes with problems. Retreats rarely include systematic opportunities for needs to be expressed. With the most pressing needs unsaid, a lot of time is wasted in awkward social interactions and days of “getting to know you” before the deeper knowledge exchanges take place.

Retreats suffer when little effort is made to ensure that the group itself is selected so that each participant meets the most relevant people possible. Participants tend to only find someone great right at the end. If there’s no built-in way for each participant to share their goals, then the retreat would rate as Ignorant (Level 0) on the Responsiveness scale.

But if the host takes the time to understand and diagnose each participant beforehand, they’ll be able to start everyone off with curated introductions — that’d rate as Diagnostic Responsiveness, Level 2.

What talents do the participants have that are relevant to the rest of the group? How can you use those talents to increase connectivity? A minimal amount of effort is required for a retreat organiser to select people who will gain from each other, so retreats rate Level 1 (Selective) on the dimension of Hyper-Connectivity.

A first intervention to improve a retreat program would be to perform active curation to balance the mix of participants with a diversity of perspectives and experiences. The mix of people then provides the ingredients for the learner to successfully achieve her outcomes.

To improve Responsiveness, retreat hosts can help participants open up to each other. This isn’t just about stating clear, logical learning goals. For people to open up and share their dreams and fears, they have to feel safe and welcome among their peers, in an empathic atmosphere and a familiar culture. This way, it takes less time and social awkwardness to understand how to start meaningful exchanges with another participant.

Lastly, the implicit rule of a “retreat” is often to relax and socialise only. But if a participant has a rare face-to-face with someone who can really help them, then rolling up their sleeves together on “some work” might be what’s badly needed. Creating an environment that enables this to happen, both practically and socially, provides a systematic deep response to those critical learning opportunities.

Workshops

Workshops tend to address a smaller audience than lectures, which allows the teacher to interact more. Well-delivered workshops feel more like getting things done than just learning. The teacher can ask questions, check in, adapt a bit. The learners’ questions improve because they’re prompted by their own experience, putting their learning into their context.

The success of a workshop depends on communicating the right topic, and adjusting it to the level at which people need the workshop content. If workshops are not calibrated correctly to the learners’ needs, the workshop facilitator will struggle for relevance. The workshop content will be too advanced, too specific, or just not relevant to current needs. In all these cases participants will be asking themselves: “Why are we doing this today?”

So how do workshops score as a peer learning format? Workshops are a great example of the power of Responsiveness. Sometimes, workshops start with a clear question: “What would you like to learn today?” Other times, it’s more subtle: as the teacher progresses, they ask questions like “Have you heard of this? Has anyone done this before?”

These dynamics make workshops quite responsive. There is time and space to delve into specific questions that a learner might have. The Responsiveness of workshops is generally at Level 1 (Calibrated), because the learners can request topics or changes in pace, even if their needs aren’t diagnosed.

What would happen if learners started requesting the specific content they need, as they need it? When workshops are planned on a preset path, they give little agency to the learner in terms of specifying the topics covered. Learners do get to ask questions, but within the limits of the preset workshop topic (set by either the program manager or a workshop facilitator; not the learner).

It’s a missed opportunity, because an expert practitioner could simply work alongside a learner, exposing really useful tips, ways of working, and attitudes to work in practice — none of which are visible in the artificial exercises that usually get planned. Even so, workshops build confidence in learners and enable them to become self-directed. They enable Agency, so rate Level 1 (Enabling) in that dimension.

Workshops aren’t hyper-connective, since the only source of knowledge is the teacher. So Hyper-Connectivity is at Level 0: Unconnected. (When learners work in groups, hyper-connectivity sometimes improves, because they start to recommend new sources or tools to each other.)

Common interventions to improve workshops increase Responsiveness by being on the look-out for topics that emerge within a group of learners. Ensuring that the workshop addresses those topics means it meets the learners’ goals at the right time, when the learners actually need those answers.

To  maximise Responsiveness, workshops need to shift from prescriptive to  diagnostic. Rather than plan specific content, they need to plan to  bring in the learners’ projects and challenges, and educate around  those. That can be as simple as planning for learners to “Show &  Tell” their project to the class, followed by the workshop expert  working with them on it.

Startup Accelerators

Startup  accelerators typically support a set of startups with similar goals and  a similar stage. They lay out a 3-month program that includes a  pre-defined educational path with workshops on general startup topics  like raising investment, product design, learning from customers, and  marketing.

Accelerators also provide a group of mentors, who support the startups with advice and connections.

Accelerators  tend to score Level 1 (Calibrated) on Responsiveness. They constantly  check in with the startup teams to monitor progress, and ask about their  big questions. But they tend to be prescriptive about the challenges  each company faces, and if there are individual diagnostics, those are  done by individual mentors who have little say in redirecting the  program. Fundamentally, accelerators schedule their educational programs  before selecting the startups they support.

Accelerators typically respond by brokering a few select connections and introductions for startup teams via their network. Those select connections take time, so we can rate accelerators Level 1 (Selective) on Hyper-Connectivity. What if you started a culture of mentorship that encouraged referrals? Who would be attracted if you opened up parts of your program to outsiders?

Agency is constrained to the goals of the accelerator program, because it will only respond within the boundaries of the program’s predefined goals (Level 1: Enabling). If the accelerator is funded by investors, the learners’ goals are constrained towards another investment round. If it’s funded by a corporation, their goals are constrained within that corporation’s strategy.

Because  of the politics exerted by the investing partners behind accelerator  programs, accelerator directors often feel an urge to increase control  and keep their startups on the program’s target, rather than  relinquishing further control to the teams.

There is an inherent conflict in increasing control in accelerator programs. It turns them into compulsory schooling exercises, and the founders into students rather than company leaders who set their own agenda. When capable founders start to take charge and control their time, they’re treated as delinquent students and berated for truancy.

Despite  the behavioral tendency for control in accelerator programs, many  accelerator program managers, and directors recognise the inherent  conflict. They see that it doesn’t make sense to plan content for the  duration of the program, since the immediate needs of each startup will  be different.

An approach that can cope with the diversity of stages and the emergent nature of startups’ development paths helps here. An easy step is to put placeholder dates in the program for workshops, dinner talks, and other events. Scheduling repeating placeholders makes logistics easier, since there’s no need to constantly schedule events.

Scheduling interactive events, like AMAs (ask me anything), fireside chats, and dinner talks, allows the founders to choose topics. It also allows the accelerator to easily invite non-educators with more relevant experience to drop in. Clarity on those topics is created through calibration: gauging progress with startups, sharing notes on them in a peer-review setting, facilitating mentors to share their observations on progress as an outside opinion, and so on. These are all small acts of responsiveness that allow for more tailoring to what the teams actually need.

Barcamps

Barcamps are open conferences, usually hosting several hundred people, that allow the participants to define the schedule. They follow a format called Open Space, which starts with everyone meeting in front of an empty schedule on a board. Everyone has the opportunity to present an idea for a conference session in front of everyone else, and add it to the schedule. Barcamps have gone through a global popularity wave, having been run in over 350 cities, with the largest attended by 6,400 people.

The session topics are always specified by the participants at Barcamps, and they tend to be more conversational than typical presentations. You’ll see the more experienced participants guide newcomers who don’t yet know how to catch the waves of opportunity at the conference. Experienced barcampers know how to make the most of it, and newcomers quickly figure it out, so Barcamps get full marks for Agency (Level 2: Active).

To facilitate the definition of the topics that are brought forward by the participants, Barcamps usually support the learning community with tools for participants to prepare upfront, before the event. This tooling consists of wikis, discussion forums, and email lists, where people can share their thoughts on topics and discussions. These all help participants take control over their agenda at the Barcamp event. It makes the program highly flexible, and Responsive (Level 2: Diagnostic).

Although most Barcamp events are modest in size, they achieve similar results to much larger conferences. For one, they tend to attract socially-minded technologists, who by their nature are extremely hyper-connective. The tech community has a culture of “making intros” without asking for anything in return. This category of people also attracts a constant inflow of new people to the community.

At first, Barcamps are overwhelming, because there are so many parallel sessions running, and walking into any of them usually means finding a room of helpful, like-minded people in mid-conversation. But this allows people to identify relevant new connections with people working towards similar goals but with different approaches, and form meaningful relationships with them. It all makes for a smoothly flowing swarm of people, and many leave the conference with relationships that they know will last them a lifetime (fully Hyper-Connected, Level 2: Instant).

Try it

With  this understanding, the PLP should give you a way to evaluate your own  program, spot weaknesses and disconnects, and reveal what to do about  them.
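
As a worked example, here’s how the formats in this chapter would score using the PLPScore sketch from earlier. The numbers follow the ratings given in each section above; the retreat’s Responsiveness of 0 assumes no built-in goal-sharing (the text rates it 0 to 2 depending on the host), and the “outruns its foundation” flag is our own illustrative check, not a verdict.

```python
# Scoring the formats discussed in this chapter with the PLPScore sketch above.
formats = [
    PLPScore("Lecture program", responsiveness=1, agency=0, connectivity=0),
    PLPScore("Panel discussion", responsiveness=0, agency=0, connectivity=0),
    # Level 2 here is the conference's *potential*, which the chapter
    # argues goes unrealised for exactly this reason.
    PLPScore("Large conference", responsiveness=0, agency=1, connectivity=2),
    # Responsiveness 0 assumes no built-in way to share goals.
    PLPScore("Retreat", responsiveness=0, agency=2, connectivity=1),
    PLPScore("Workshop", responsiveness=1, agency=1, connectivity=0),
    PLPScore("Startup accelerator", responsiveness=1, agency=1, connectivity=1),
    PLPScore("Barcamp", responsiveness=2, agency=2, connectivity=2),
]

for fmt in formats:
    flag = "  <- connectivity outruns its foundation" if fmt.connectivity_outruns_foundation() else ""
    print(fmt.describe() + flag)
```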

We’d love to see you try, and talk to you about your evaluation and ideas. Please share yours with us, so we can understand how the PLP works in practice, and give you some pointers if you’d like. (The dimensions you need to run your evaluations have been defined in detail in an earlier post titled “What is Peer Learning?”).


If you’d like to read more as it comes in, please sign up to the Source newsletter. We’re sharing as we write!
