The flight from Amsterdam to Jakarta is 16 hours, but it feels even longer when you know you have to fly right back. I was on the hook to present a research project on growing bell peppers to the Minister of Agriculture.
I was expected to renew its funding.
Except that the 600,000 EUR pilot project only managed to get 2 farmers involved.
Except that the university’s own teams were fighting and dodging the blame for the reason the project sold exactly zero bell peppers.
That’s why, one day beforehand, they’d dumped this hand-off on me, a junior researcher.
The technology worked. Automated greenhouses that grow things faster, better, without chemicals. But Dutch agriculture technology always works in Holland. The government invests a lot in exporting Dutch agricultural technology, because what works here doesn’t always work there. And making it work in other places requires more than just engineering.
Indonesia covers 2 million square kilometres (roughly the size of Greenland), has a population of 250 million, and agriculture is a major part of its economy.
The pilot had some wins to show on the engineering end; it worked for growing bell peppers. On top of that, it made a great case for investment, at least on paper. Greenhouses are a big up-front cost, but because bell peppers are a delicate crop with a limited shelf life, they promise high margins.
Except that Indonesians don’t eat bell peppers.
Except that Indonesia’s underdeveloped transport and distribution infrastructure is a handicap, even for chilli peppers, which they already grow and use. Even though Indonesia’s climate is great for growing chilli peppers, they still need to import them.
My job was to say none of these things.
Indonesia has formal protocols for handing over projects between governments. This one would be handed over at a closing ceremony in the Ministry in Jakarta.
A U-shaped table would be set up in a formal meeting room, with seats assigned by name tags, and a jar of water and breath mints to be shared by each pair of attendees.
The Minister would kick off the meeting with a welcome, followed by the representing foreign diplomat, in this case the Dutch agricultural counsellor, saying some words on the strength of collaboration between countries.
Every stakeholder that is responsible for the project would be present in the meeting minutes, but only the most senior representatives would speak, and usually they're not too involved in the project. The discussion would be kept at a high level, to remove the potential sting of contradictory details.
Then the project leader would present some of the activities and selected results. In his wake, the 2 farmers who participated in the pilot project would each say a word of gratitude for the grant they received.
After the formal discussion ceremony, the Minister would leave for his next obligation.
For the rest of the participants there would be an informal walk-about exhibit, showing the tangible technology accomplishments of the project.
The higher goal of a formal hand-over ceremony like this is to empower the Minister to make a call on the future of the project discreetly, later, outside of the room in which the ceremony dynamics take place. There are often diplomatic stakes tied to field projects, which need their own time and place behind closed doors to be settled. If all the relevant people are there for the presentation, the information that was shared is deemed to be legitimate, even if they didn’t get a chance to speak or ask questions.
This creates insurance for the Minister’s decision later. Anything that might backfire can be pointed back to the responsibility of any of the other stakeholders that were present.
This also means everyone knew that no hard questions would be asked. Which is why in spite of the fact that the pilot didn’t work, the university expected the project funding to be extended.
I had been briefed on all of this, and told exactly how much was riding on successfully extending the funding. This could mean another 600,000 EUR to keep 5 of my colleagues working for 4 more years. Or not.
The first clue that the Minister wouldn’t show up came on the agenda that was circulated the day before. There was an asterisk beside his name. But he appointed someone from the ministry in his stead, and the ceremony proceeded.
No difficulties were mentioned during the presentations. Throughout the polite exchanges, and even in the informal discussions afterwards, raising them would have broken both protocol and the university's instructions.
After the ceremony, we all shared goat satay and a few glasses of Bintang beer, and I was on the flight home.
The Minister decided to extend the project for another 4 years, with the explicit request to scale up adoption of the greenhouse technology for bell peppers.
Eight years later, the project continues in some other form, though now it doesn’t grow bell peppers or use greenhouses.
Learning on the agenda doesn’t mean learning happened
The example above seems like an extreme case, but it’s actually common. It’s not just formal procedures — the same oversights happen at conferences, universities, and startup accelerators. Even interactive sessions go through the motions of energetic discussion and activity, but still fail to make an impact.
To help educators catch these mistakes, we've created a tool for evaluating the effectiveness of a learning environment and for making well-founded choices about what to do next.
The ARC is based on the 3 characteristics of Peer Learning Programs: Agency, Responsiveness and Connectivity. Here's a quick recap:
Agency — Invocation of self-direction in learners.
Agency evolves from a push by the educator to shape the learner’s mindset, enabling them to start self-directing. Then, it shifts into a supporting role, catalyzing their agency to accelerate them in the directions they choose.
Responsiveness — A learning environment's ability to systematically assess learners' needs and to provide relevant knowledge on demand.
Programs start by becoming more responsive when the education content is calibrated to learners’ needs. Ultimately, responsiveness takes form as a diagnosis through mentoring conversations with the learner.
Connectivity - Access provided to specific knowledge sources, especially those beyond the learner’s network.
This starts with brokering connections on the learner’s behalf to expand their reach, and evolves into creating hyper-linked environments where they can make those connections instantly by themselves.
These are interrelated
Systematically connecting learners to others on their path only makes sense when they have a path. That means they have Agency within their learning environment, but also that the program itself is designed to listen and respond. So Connectivity depends on a foundation of Agency and Responsiveness.
Assessing an education program on each of these characteristics helps us see if we’re going through the motions without effect.
Responsiveness is always a good place to start an evaluation. It is the most telling part of the assessment, showing how a program tailors itself to the learner and responds as they progress. Completing the evaluation from there, by going through the other characteristics, makes the assessment fall into place. As a whole, the evaluation helps to determine whether relevant learning actually takes place, and if not, how and where to fix it.
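As a rough sketch of how an evaluation might be recorded, here is a minimal Python representation of the three scales and their level names. The `ARCRating` class and the example rating are purely illustrative — they aren't part of any published tool:

```python
from dataclasses import dataclass

# Level names for each scale, as described in the recap above.
AGENCY = {0: "No Agency", 1: "Initiating", 2: "Catalyzing"}
RESPONSIVENESS = {0: "Ignorant", 1: "Calibrated", 2: "Diagnostic"}
CONNECTIVITY = {0: "Unconnected", 1: "Expanded", 2: "Hyper-linked"}

@dataclass
class ARCRating:
    """One program's ARC evaluation; each dimension rates 0-2."""
    program: str
    agency: int
    responsiveness: int
    connectivity: int

    def summary(self) -> str:
        return (f"{self.program}: "
                f"Agency {self.agency} ({AGENCY[self.agency]}), "
                f"Responsiveness {self.responsiveness} ({RESPONSIVENESS[self.responsiveness]}), "
                f"Connectivity {self.connectivity} ({CONNECTIVITY[self.connectivity]})")

# Example: a lecture that calibrates with a show of hands, as discussed below.
lecture = ARCRating("Lecture", agency=0, responsiveness=1, connectivity=0)
print(lecture.summary())
```

Writing a rating down this explicitly forces the evaluator to commit to a level on each scale, rather than a vague overall impression.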
Detect then respond
The professor from Tanzania fell asleep in the front row, with his video camera aimed at me from a mini-tripod on his desk.
It was my first day teaching in Africa, the start of a week of entrepreneurship workshops in Pretoria. Our UK NGO client had raised funding for an innovation prize for African engineers, called The Africa Prize For Engineering Innovation. A few months before, they'd started recruiting a training provider, asking for proposals based on a topic checklist they'd used in their London startup program.
They'd gotten in touch with me because I'd been actively researching technology support programs around Africa. Some directors there, in Nairobi, Harare and Accra, had already explained to me that the typical startup topics weren't so relevant there. But when an NGO makes a call for proposals, your proposal has to bid against their specifications or it will be disqualified. So I played ball, hoping to re-assess the curriculum if we were selected. In the end, the NGO wanted the same topics, but agreed to treat them as discussions more than lecture content.
So fifteen engineering entrepreneurs from all over the continent were flown into a South African business park. Hearing about their work made me feel energized: an electronic payment system from Nigeria, affordable toilets in rural Uganda, nano-tech water filters in Tanzania, a mobile money exchange service from Kenya, a rock-crushing machine for small mines, an app for teaching kids to read in Shona, a native Zimbabwean language...
While the NGO manager worked from her laptop at the back of the training room, I started by explaining this caveat to the engineers: "These are state-of-the-art entrepreneurship techniques in the US and Europe, so let's talk about them and what works here."
The small conference room was professional and comfortable, with air conditioning and blinds drawn to avoid the sun and its blazing heat off the terrace outside.
People were engaged in the discussion, but one closed-eyed face in the front row slowly bounced, dropped, bounced, dropped. He seemed to have been prepared for this though, since he'd started the day by setting up his video camera. He explained that he had to work through nights, both on his business and his responsibilities in his university, so his plan had been to listen to the recorded lectures at night while he worked. It turned out everyone had a lot more on their plate than their engineering business...
At lunch, one of the engineers took me aside:
"I'm only telling you this because you're the first foreigner I've seen who started by asking what works here. Everyone here will be polite, but you won't get direct feedback. Nobody will want to offend the funder, because we know if we do, we don't have a chance at winning the grant."
Over that week, I made time for long one-on-one meetings with each engineer to understand them. It became clear why Silicon Valley best practices, like interviewing potential corporate customers to research their needs before designing a product, had different cultural connotations:
"Everybody's always talking, and talk isn't serious. These people are busy, and if you get a meeting with them, they expect you're prepared with something good to offer. Startups aren't considered innovative, they're considered small and unreliable. I tried what you describe, getting meetings to learn about their needs, and soon rumours spread that I was a time-waster."
Or when sharing how VC funding worked in London and Silicon Valley, people were sceptical of equity funding. It turned out most of them could raise more money from African grants than they could from a London seed investment, so why give up a piece of their company so soon?
It makes sense to build a customer email list, unless your customers never use email but always use WhatsApp.
It makes sense to prove an idea by getting a few paying customers, even investors, and only then take a risk on it. But if your parents and whole family survive from your salary every month, then you'll see the risk of leaving your job very differently. When is a sure thing really a sure thing?
Evaluating common approaches
To demonstrate how the ARC helps us spot these situations, let’s use it to evaluate a few different educational formats:
Lectures and lecture series are like riding a train. Sit back, get comfy, and cast your gaze out the window. The journey is set; let's hope it's interesting.
Lecture-based programs excel at topics where the learner needs little input into the direction of their learning journey. Lectures typically don’t sense learners’ needs or adapt to them, don’t connect them to experts other than the lecturers, and give little or no agency to the learner regarding the topics.
For topics and disciplines that don’t change much, the repetitiveness of this form lets lecturers improve their delivery through practice, and polish their content over time.
People tune out when lectures don't feel relevant. This is the cause behind lecturers' concerns about creating engagement in their classes.
When classroom approaches include interactivity, experiential learning and discussion, they help stimulate the learner and make the learning experience more effective, but everyone is still on the same train on the same track. The curriculum is set.
When lecturers start by asking the students questions - either about what they want to learn today, or assessing them with a show of hands - the educator is reshaping the plan according to learner needs. This is a form of Calibration, so on the ARC, we can rate Responsiveness at Level 1 (Calibrated) in these cases.
However, lectures don’t enable learners to decide how they would like to achieve their learning goals, so we rate them at Level 0 (No Agency) for Agency. They also don’t involve other experts to address these needs, so we evaluate them as Unconnected — Level 0 on the Connectivity scale.
We can see why even great lectures leave something to be desired from the learners' perspective. We can also see how to improve them. Quick wins lie in becoming more responsive: what about starting with a session where learners share their biggest questions about the upcoming topic? Those reveal struggles and issues up-front, so lecturers know where to detour or slow down in advance. Or, if the class is too big for that, a quick online survey of participants before the lecture can give similar insight. If a few students have relevant projects, why not bring them to the podium and work with them or coach them, for all to benefit?
All these types of actions will help the lecturer to collect an impression of what’s on the mind of learners as a form of preparation, and put emphasis on those particular topics.
The idea behind the panel discussion is to get a good conversation going between experts, to flush out the most relevant topics of the day. The reality is that panels tend towards being a set of small talks, with each panellist pushing a pre-determined agenda. A lot depends on the preparation of the moderator, but their research tends to be on the panelists and not on the audience, so their interjections risk feeling irrelevant to the crowd. For all of their potential, and all the hard work and good intentions that go into them, panel discussions at conferences tend to get the most scorching commentary on social media and other backchannels. Let's look at why.
Many panel moderators don’t like to involve the audience, and they have good reason. Audience questions are often irrelevant, or they “like to state a point related to the topic, rather than asking a question”. Audience involvement is seen mostly as a distraction, and because of it, panel discussions restrict learner participation.
Let’s look at panels from the learners’ perspective. Panel discussions do tend to promise interesting topics from experts worth learning from. Learners can choose which panels to attend. So there's an attractive potential of relevance and agency to live up to.
But the panel format leaves learners powerless to express themselves, or direct the discussion to something meaningful to them.
This is frustrating for learners. So when it’s question time, both the frustration and the built-up pressure to try to learn something relevant turn Q&As into a different phenomenon - a spouting pressure release for unanswered questions and self-expression. The problem isn’t the audience; it’s that they haven’t been given a voice except for a short question right at the end, and by then the panel topics have diverged far from what's relevant to the audience.
It's frustrating for panellists too. They’ve been put in a format that appears to be conversational, but is actually quite slow and constraining. When panels get boring, a typical reaction is to turn them into a debate to create some dynamism and energy. But that can lead to a battle of generalisations, where panelists assume different contexts or get forced to defend straw-man arguments.
How do panels score on the ARC? Well, they fail to make a mark on Connectivity, because they don’t facilitate the creation of relationships with the learners (Level 0, sorry).
Do the learners have the opportunity to specify the topics of discussion, other than asking a question? Does the learner have any say in how to interact with the expert? No to both, hence zeroes all around for Responsiveness (Ignorant) and Agency (No Agency).
Targeted calibration on the questions that learners have creates more engagement. Could you find specific challenges amongst them for the panel to discuss?
A few conferences flip the panel discussion into a "Positive Shark Tank", where a few interesting learners' challenges are presented to the panel, and panelists then compete or build on each other's advice.
Others take questions early, and a moderator picks themes from them to direct the discussion.
Local people in the audience understand local needs. Novices in the audience understand novice needs. Putting them in the mix creates unity amongst the interests of all participants, and provides a more constructive learning context overall.
Panels benefit a lot from improving Responsiveness - identifying and including representatives of groups in the audience, starting by calibrating with the audience, or using the panel as an opportunity to diagnose peer learners. These steps make them far more relevant, rewarding and engaging.
If you’ve ever had the opportunity to experience a conference of thousands or tens of thousands of people, you’ve probably left with a sense of awe or excitement. Large conferences are a unique type of learning experience because of who you can meet. They attract a broad, diverse group of people with some common interest. The potential is in the people you meet, who you’d probably never discover otherwise.
In order to deliver on the promise of bringing people together, conference organisers need to attract a crowd. The way they draw in the crowds is by committing big name speakers to the event.
For the organisers, the success of a conference is usually fragile. If the talks are bad, the participant feedback for the conference as a whole will reflect that. So best include more talks, so there are more good ones. If a lot of people leave without making great connections, the feedback will reflect that too.
Conferences hold a promise of connections, and the more responsive they are, the more consistently they deliver on it. The freedom to shape the agenda is an issue: how can you give participants more control to shape their own program at the conference? The more people can choose the banners they congregate under, the more the environment helps them meet relevant new people. The ARC reveals where this risk to the success of a conference comes from.
Because so many people with common interests and challenges are drawn together, the Connectivity potential of a large conference is Hyper-linked, with a maximum rating of 2 — there could be almost no delay for learners to find people highly relevant to their specific learning goals. But, because conferences are designed around the speakers, this is where misalignment with the learners’ needs happens.
Conference schedules are usually heavy on talks, and light on social interaction. The physical space also induces people to find a seat in a dark lecture hall, listening to a speaker, not engaging with like-minded people. As a result, peoples’ time allocation for the conference is imposed on them.
The content of the talks themselves is also set by the individual speaker, not the learners. This centralised control over the setup of the program makes large conferences ignorant of learners' needs (Responsiveness Level 0).
Large conferences enable some agency in learners: they can choose which talks to attend (or not). They don’t allow the learner to specify or direct topics, but still push the learner to curate their own learning in some way (Agency Level 1: Initiating).
So the promise of Connectivity is not supported by a foundation. The paradox of the conference is that all the talks actually prevent people from meeting.
This foundation exists at conferences that, by design, provide a way for people to share their agendas, those that don't cater to speakers who want to give the same talks everywhere, and those that loosely structure social interactions around common interests.
A simple, effective starting point is for conferences to encourage shorter talks and make it convenient for speakers to prepare their slides at the last minute. They're usually not being lazy, but using the time to understand who'll be in their audience. Another simple change is to light the audience, encouraging the speaker to interact, or at least respond to facial expressions in the crowd.
Still, these only go so far. For the full foundation to be built, allowing instant connectivity for everyone present, the extensive knowledge among the experts must be used to diagnose the needs of others, and learners with active agency in their own work must have control over at least part of the conference agenda.
Rather than fewer, bigger, darker rooms, conferences that shift towards more, smaller, comfortable rooms take big steps towards this. Those types of conferences naturally allow experts to become known, diagnose the needs in the room, and for peers to find each other.
Over the last decade, the conference business in many topics has shifted towards mega-conferences that compete on sheer size, rather than content or quality. A response to that is the emergence of smaller retreats, where a group of fewer than 20 people is chosen to meet in a comfortable, secluded environment for a few days.
Retreats are usually quite unstructured, allowing the participants to bring their own agendas and start meaningful conversations. They are good examples of creating environments that build on Agency. If you’re invited to a retreat, you’re expected to be self-directing and proactively make use of the opportunity. That’s why they rate a level 2 (Catalyzing) on Agency.
Being loose on structure has its benefits, but it also comes with problems. Retreats rarely include systematic opportunities for needs to be expressed. With the most pressing needs unsaid, a lot of time is wasted in awkward social interactions and days of “getting to know you” before the deeper knowledge exchanges take place.
Retreats suffer when little effort is made to ensure that the group itself is selected so that each participant meets the most relevant people possible, as early as possible. Retreats with no built-in way for participants to share their goals rate as Ignorant (Level 0) on the Responsiveness scale.
But if the host takes the time to understand and diagnose each participant beforehand, they’ll be able to start everyone off with curated introductions — that’d rate as Diagnostic Responsiveness, level 2.
What talents do your retreat participants have that are relevant to the rest of the group? How can you use that talent to increase connectivity? Only a minimal amount of effort is required for a retreat organiser to select people who will gain from each other, so retreats typically rate level 1 (Expanded) for Connectivity.
A first intervention to improve a retreat program is to perform active curation: balance the mix of participants for a diversity of perspectives and experiences. The mix of people then becomes the ingredients for the learner to successfully achieve her outcomes.
To improve Responsiveness, retreat hosts can help participants open up to each other. This isn’t just about stating clear, logical learning goals. For people to open up and share their dreams and fears, they have to feel safe and welcome among their peers, in an empathic atmosphere and a familiar culture. This way, it takes less time and social awkwardness to figure out how to start meaningful exchanges with another participant.
Lastly, the implicit rule of a “retreat” is often to relax and socialise only. But if a participant has a rare face-to-face with someone who can really help them, then rolling up their sleeves together on “some work” might be what’s badly needed. Creating an environment that enables this to happen, both practically and socially, provides a systematic, deep response to those critical learning opportunities.
Workshops tend to address a smaller audience than lectures, which allows the instructor to interact more. Well-delivered workshops feel more like getting things done than "just learning". The teacher can ask questions, check in, and adapt a bit. The learners' questions improve because they’re prompted by their experience, putting their learning into their context.
Experienced workshop instructors usually start by checking in with their students, so they can adjust the session to the right level of advancement.
So how do workshops score as a peer learning format? Workshops are a great example of the power of Responsiveness. Sometimes, workshops start with a clear question: “what would you like to learn today?”. Other times, it’s more subtle: as the teacher progresses, they ask questions like “Have you heard of this? Has anyone done this before?”.
These dynamics make workshops quite responsive. There is time and space to delve into specific questions that a learner might have. The Responsiveness of workshops is generally at Level 1 (Calibrated), because learners can request topics or changes in pace, even if their needs aren’t diagnosed.
What would happen if learners started requesting the specific content they needed, as they need it? Workshops planned on a pre-set path don't allow for that much learner agency. Questions can create useful detours, but the content still finds its way back to a preset path, which is outside of the learners' control.
Workshops build confidence in learners and enable them to become self-directed. They enable Agency, so rate 1 (Initiating) there. There's only one source of knowledge, the instructor, so Connectivity is at Level 0: unconnected.
Common interventions to improve workshops increase Responsiveness by staying on the look-out for topics that emerge within a group of learners. Ensuring that the workshop addresses those topics means the content meets the learners' goals at the right time, when they actually need those answers.
Some workshops rely on active agency, recognising the opportunity for an expert practitioner to just work with a learner. “Show & Tells” are an example of this style of workshop, where learners present their project to the class, followed by the workshop expert working with them on it. These expose useful tips, ways of working and attitudes to work in practice — none of which are visible in the artificial exercises that usually get planned. This approach also moves Responsiveness to Diagnostic, since the instructor goes deep with learners on their projects.
Startup accelerators typically support a set of startups with similar goals and a similar stage. They lay out a 3-month program that includes a pre-defined educational path with workshops on general startup topics like raising investment, product design, learning from customers, and marketing.
Accelerators also provide a group of mentors, who support the startups with advice and connections.
Accelerators tend to score Level 1 (Calibrated) on Responsiveness. They constantly check in with the startup teams to monitor progress, and ask about their big questions. But they tend to be prescriptive about the challenges each company faces, and if there are individual diagnostics, those are done by individual mentors who have little say in redirecting the program.
Accelerators typically respond by brokering a few select connections and introductions for startup teams via their network. Those select connections take time, so we can rate accelerators level 1 (Expanded) on Connectivity.
If the accelerator is funded by investors, the learners' goals are constrained towards maximizing a return on that investment. If it’s funded by a corporation, their goals are constrained within that corporation’s strategy.
Agency is constrained to the goals of the accelerator program, because it will only respond within the boundaries of the program’s predefined goals (Level 1: Initiating).
Some accelerators, particularly those run by novices, treat them as compulsory schooling exercises, which limits the agency of the founders they support. When capable founders take charge of their time and run their business, they’re treated as delinquent students and berated for truancy when they don't participate in the heavy training program.
Others recognise that it doesn’t make sense to plan content for the duration of the program, since the immediate needs of each startup will be different.
An approach that can cope with the diversity of stages, and emergent nature in the development paths of startups helps here. An easy step is to put placeholder dates in the program for workshops, dinner talks, and other events. Scheduling repeating placeholders makes logistics easier, since there’s no need to constantly schedule events and promote them to the startup teams.
Scheduling interactive events, like AMAs (ask me anything), fireside chats and dinner talks, allows the founders to choose topics. It also allows the accelerator to easily invite non-educators with more relevant experience to drop in. Clarity on these events is created through calibration: gauging progress with startups, sharing notes on them in a peer review setting, facilitating mentors to share their observations on progress as an outside opinion, and so on. These are all small acts of responsiveness that allow for more tailoring to what the teams actually need.
Barcamps are open conferences, usually hosting several hundred people, that allow the participants to define the schedule of topics in every room. They follow a format called Open Space, which starts with everyone meeting in front of an empty schedule on a board. Everyone has the opportunity to present an idea for a conference session in front of everyone else, and add it to the schedule. Barcamps have gone through a global popularity wave, having been run in over 350 cities and the largest attended by 6,400 people.
Barcamps sessions tend to be more conversational than typical presentations. The Barcamp culture emphasises including people and de-emphasises social status, and they also favour distributed responsibility - so participants take charge and make improvements or fixes as they feel necessary. Compared to a typical conference, they clearly lack centralised control, and usually feel more "community" than "polished." This takes some adjustment for first-timers, who are usually spotted by someone more experienced and guided along. Experienced barcampers know how to make the most of it, and newcomers quickly figure it out, so Barcamps get full marks for Agency (Catalyzing).
To facilitate the definition of the topics brought forward by the participants, Barcamps usually support the learning community with tools to prepare upfront, before the event. This tooling consists of wikis, online forums, and email lists, where people can discuss their ideas for sessions. These all serve participants in taking control over their agenda at the Barcamp event. It makes the program highly flexible and Responsive (Level 2: Diagnostic).
Although most Barcamp events are modest in size, they achieve similar results to much larger conferences. For one, they tend to attract socially-minded technologists, who have a culture of helping others and “making intros” without asking for anything in return.
At first Barcamps are overwhelming, because there are so many parallel sessions running, and walking into any of them usually means finding a room of people in mid-conversation. As each new session begins, a new group forms around similar goals but with different approaches, and meaningful relationships form from them. It all makes for a smoothly flowing swarm of people, and many leave the conference with relationships that they know will last a lifetime (Fully Connected Level 2: Hyper-linked).
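The evaluations above can be collected into a single side-by-side sketch. The ratings below mirror this section's conclusions; the table format and the `weakest_dimension` helper are hypothetical conveniences for illustration, not part of the ARC itself:

```python
# Ratings as evaluated in this section: (Agency, Responsiveness, Connectivity),
# each on the 0-2 scale. Lecture responsiveness assumes calibration is used.
FORMATS = {
    "Lecture":          (0, 1, 0),
    "Panel":            (0, 0, 0),
    "Large conference": (1, 0, 2),
    "Retreat":          (2, 0, 1),
    "Workshop":         (1, 1, 0),
    "Accelerator":      (1, 1, 1),
    "Barcamp":          (2, 2, 2),
}

def weakest_dimension(scores):
    """Return the lowest-rated dimension - a quick hint at where to intervene first."""
    dims = ("Agency", "Responsiveness", "Connectivity")
    return min(zip(dims, scores), key=lambda pair: pair[1])[0]

for name, scores in FORMATS.items():
    a, r, c = scores
    print(f"{name:<18} A={a} R={r} C={c}  weakest: {weakest_dimension(scores)}")
```

Laid out this way, the pattern the next paragraphs discuss becomes visible: formats with high Connectivity potential but low Agency or Responsiveness (like the large conference) are the ones that fail to deliver on their promise.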
Responsiveness, Agency, Connectivity. There's an interdependency between these characteristics.
There is no set path by which a program evolves but generally, Connectivity depends on Agency and Responsiveness.
For example, running a massive conference doesn't help people forge great relationships if they have no good way to share their interests, find each other, and meet in comfortable places. Bringing in a world-renowned expert to lecture doesn't help if people believe their approach isn't right for the local context.
Peer Learning communities tend to develop through 3 stages:
1. Inertial - Programs that can calibrate to learners' needs and enable agency move learners from a static, passive state to an active level of engagement.
2. Accelerating - Once learners become more self-directing, programs build on their agency, perform deeper diagnostics and recruit relevant support. This accelerates their progress.
3. Leaping - The final step is to offer instant access to the most relevant experience based on the key challenges in their projects. This allows learners to leap over months or even years of challenges they'd otherwise face.
With this understanding, the ARC should give you a way to evaluate your own program, spot weaknesses and disconnects, and reveal what to do about them.
We’d love to see you try, and to talk with you about your evaluation and ideas. Please share yours with us, so we can understand how the ARC works in practice, and give you some pointers if you’d like. (The dimensions you need to run your evaluations are defined in detail in an earlier post titled “What is Peer Learning?”.)
If you’d like to read more as it comes in, please sign up to the Source newsletter. We’re sharing as we write!