_Published: Oct 3, 2025_
# **Introduction**
Communities of Practice (CoPs) (a.k.a. Practitioner Networks) are groups of people who learn together by sharing experiences and knowledge around a common domain or challenge. In the field of climate adaptation and resilience – for example, in energy adaptation or nature-based solutions – CoPs play a vital role in spreading innovations and aligning efforts across diverse stakeholders. A key ingredient for a thriving CoP is _common knowledge_: the set of understandings, facts, and assumptions that all members share and know that others share as well.
Common knowledge provides the “background of intricate, recursive assumptions” that make communication and coordination possible [^1]. When everyone is on the same page about core concepts and goals, collaboration becomes smoother, trust grows, and the community can act more cohesively. This white paper defines common knowledge in the CoP context and introduces _focal points_ – salient markers that communities rally around – as catalysts for building that shared understanding. It then presents a practical framework of metrics and indicators to assess how well a CoP is cultivating common knowledge, with real-world examples from energy transition and nature-based solution communities. Finally, it offers actionable guidance on data sources and evaluation strategies (from surveys to digital trace analysis), and a closing checklist for facilitators and funders to diagnose the state of common knowledge in their CoPs.
# **Defining Common Knowledge in Communities of Practice**
In everyday terms, common knowledge in a community means “we all know this, and we all know that we all know it.” Formally, it’s knowledge that everyone in the group shares, that everyone knows everyone shares, and so on recursively (a concept rooted in epistemic logic)[^2]. Steven Pinker emphasizes that such mutual knowledge is the _engine_ of social life: it allows people to make sense of each other’s words and actions and “consolidates the mutual trust intrinsic to social cooperation”. In a CoP, common knowledge can include shared definitions (e.g. what “resilience” means), familiar reference points (like a well-known case study or tool), and unspoken assumptions or values that underpin the practice. Etienne Wenger, who coined the CoP concept with Jean Lave, would recognize common knowledge as part of the community’s “shared repertoire of resources: experiences, stories, tools, ways of addressing recurring problems – in short, a shared practice”[^3]. This shared repertoire does not form overnight; it “takes time and sustained interaction” as members engage in joint activities and learn from each other.
Why is common knowledge so important in CoPs? One reason is that it makes tacit knowledge sharing easier and more effective. When members have common ground and aligned mental models, they can interpret one another’s ideas with insight into the implicit assumptions behind them. A study of an online/offline librarian CoP (KMaya) found that “the existence of common knowledge and a shared system of values” gave members insight into the values embedded in each other’s knowledge, easing the transfer of know-how and fostering a richer shared knowledge base[^4]. In practical terms, members don’t have to constantly explain basic concepts or worry that messages will be misunderstood – a huge benefit when dealing with complex topics like climate science or sustainable energy. Common knowledge thus underpins the trust and efficiency of collective learning. However, it can also be a double-edged sword: Pinker notes that the same common knowledge that enables “intelligible exchange” can also complicate it (for instance, if everyone knows that an issue is sensitive, they might tiptoe around it). The goal for CoP facilitators is to cultivate healthy common knowledge – a _productive_ common ground of facts, terminology, and values that support open dialogue and coordinated action, without becoming an echo chamber that stifles new ideas.
## **Focal Points: Markers and Catalysts of Common Knowledge**
How do communities actually establish what becomes common knowledge? This is where the idea of _focal points_ comes in. In game theory, Thomas Schelling introduced focal points (also known as Schelling points) to describe solutions people converge on by default, without formal agreement, because the option somehow stands out as natural or salient[^5]. Schelling observed that people can often coordinate their expectations “if each knows that the other is trying to do the same” – they will gravitate toward a prominent choice that they _expect_ others will also choose. In a classic example, when strangers are asked to meet in New York City without communication, many intuitively choose “noon at Grand Central Terminal” because it’s an obvious landmark. In the context of a CoP, focal points are the **shared reference markers** that community members naturally rally around, helping to align their knowledge and efforts even without top-down instruction.
Focal points in communities of practice can take various forms:
- **Key Concepts or Narratives:** A compelling concept can serve as a focal point if everyone recognizes its importance. For example, in a nature-based solutions CoP, the idea of “building with nature” might become a unifying mantra. Similarly, many climate resilience networks have adopted the concept of _“just transition”_ as a focal narrative that encapsulates shared values (social justice in climate action). When such a concept is widely embraced, it becomes common knowledge that “this is what we’re collectively aiming for.”
- **Standard Frameworks or Tools:** A community often converges on certain frameworks as focal points for practice. A good example is the **IUCN Global Standard for Nature-based Solutions**[^6], a framework of 8 criteria that has been broadly adopted by practitioners worldwide. By training all members on this standard, a CoP ensures a common baseline understanding of what qualifies as a nature-based solution and how to measure its quality. The standard itself acts as a focal artifact – a _reference everyone knows_ – which catalyzes common knowledge. In energy transition communities, a comparable focal point might be a well-known scenario or guideline (e.g., a national “Net Zero by 2050” roadmap) that everyone refers to when planning projects.
- **Shared Events and Rituals:** Regular gatherings can be focal points in time that concentrate knowledge exchange. A recurring annual workshop or monthly webinar series becomes a marker in the community’s rhythm where members align on updates and lessons. For instance, the Gulf of Mexico Climate Community of Practice centers its activity around an annual workshop rotating through states. These events, along with interim webinars, serve as focal moments when the latest science, tools, and best practices are shared, creating common reference points for members. Over time, attendees come to expect that “if I go to the yearly meeting, I’ll hear the _current_ common knowledge we all need.” Having a predictable cadence of focal events helps reinforce and refresh mutual understanding.
- **Knowledge Artifacts:** Communities often produce tangible outputs that become focal points for common knowledge. A striking example comes from the Gulf of Mexico Climate CoP, where members realized the importance of delivering _common messages_ about climate impacts. They collaboratively created a one-page “5 Things You Need to Know About Sea Level Rise” card to disseminate to local stakeholders. This simple artifact became a focal point: it distilled the core facts (common knowledge) the community agreed everyone should know, and it was used widely as a trusted reference. The process of co-creating it was as important as the product – it signaled that the group had achieved consensus on key messages. In general, co-authoring guides, FAQs, glossaries or case-study collections can both mark and fuel common knowledge. These resources function as _anchors_ that new and old members alike can point to and say, “this is what we collectively know.”
By deliberately identifying and nurturing such focal points, facilitators can accelerate the formation of common knowledge. Focal points act as **catalysts** by making certain knowledge highly visible and accessible in the community. They also serve as **markers** – if a CoP has robust focal points (shared jargon, go-to tools, flagship stories), it’s a sign that members have achieved a level of common understanding. Moreover, focal points help newcomers quickly grasp the community’s knowledge landscape (they highlight “start here; this is important to all of us”). As Pinker observed, even rituals or conventions can create common knowledge by publicly signaling shared values. In an energy transition CoP, for example, a regular practice of beginning meetings with a success story from the field might become a ritual that reinforces common ground – everyone starts to know the canonical success stories and draws inspiration from them.
However, one must be mindful that focal points derive their _prominence_ largely from the community’s tacit agreement. It’s important to choose focal points that genuinely resonate with members. A focal point that is imposed but not embraced (say, a jargon term or tool that people find unhelpful) won’t effectively build common knowledge. Co-creation and feedback are key – when members themselves highlight what ideas or resources they find most salient, those are likely candidates for true focal points.
# **Metrics and Indicators for Common Knowledge in a CoP**
To manage and support common knowledge in a community of practice, facilitators and funders need ways to _assess its degree and quality_. Below is a framework of practical metrics and indicators, organized by aspect of common knowledge. These can help answer: How much common knowledge do we have? Is it improving? And is it the _right_ common knowledge?
### **1. Shared Language and Definitions:**
One indicator of common knowledge is the consistency of terminology and understanding across members. Metrics could include analyzing community communications (meeting transcripts, forum posts, documents) for use of key terms. High common knowledge is indicated by widespread use of shared vocabulary with the same meaning. For example, in an adaptation CoP, if everyone uses the term “NbS” (Nature-based Solutions) in a similar way, it suggests alignment. A more formal metric might be a **glossary adoption rate** – e.g. the percentage of active members who correctly identify or define the community’s top 10 jargon terms (measured through a quiz or survey). If disparities are found (multiple interpretations of a core term), that flags an area to build common understanding.
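To make this concrete, here is a minimal sketch (in Python) of computing a glossary adoption rate from quiz responses; the member IDs, terms, and answers are hypothetical placeholders, not real community data:

```python
# Minimal sketch: glossary adoption rate from a terminology quiz.
# Member IDs, terms, and answers are hypothetical placeholders.

responses = {
    "member_01": {"NbS": True, "just transition": True, "adaptive capacity": False},
    "member_02": {"NbS": True, "just transition": False, "adaptive capacity": True},
    "member_03": {"NbS": True, "just transition": True, "adaptive capacity": True},
}

def glossary_adoption_rate(quiz: dict) -> dict:
    """Share of respondents who correctly defined each term."""
    terms = {term for answers in quiz.values() for term in answers}
    return {
        term: sum(answers.get(term, False) for answers in quiz.values()) / len(quiz)
        for term in terms
    }

for term, rate in sorted(glossary_adoption_rate(responses).items()):
    print(f"{term}: {rate:.0%} of respondents defined this term correctly")
```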
### **2. Core Knowledge Awareness:**
These indicators measure whether all members are aware of certain foundational facts or resources. Surveys or self-assessments can be used periodically: for instance, ask members if they know about important documents or concepts (like a relevant policy, a standard, or a famous case study in the domain). You might establish a checklist of “common knowledge items” and track what proportion of the community reports familiarity with each. In an energy transition CoP, this could include items like “awareness of the national renewable energy targets” or “knowing the success story of X community’s solar project.” If 90%+ of members recognize those references, that’s strong common knowledge; if only 40% do, there’s work to do disseminating that knowledge more widely. Another approach is **knowledge tests** at workshops (simple before-and-after polls on key facts). These not only gauge baseline common knowledge but can show gains in shared knowledge after an intervention.
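As an illustration, a before-and-after poll can be summarized in a few lines of Python; the items, counts, and respondent numbers below are hypothetical:

```python
# Minimal sketch: before/after workshop polls on core knowledge items.
# Item names, counts, and respondent numbers are hypothetical placeholders.

respondents = 25
before = {"national renewable energy targets": 12, "X community solar project": 8}
after = {"national renewable energy targets": 21, "X community solar project": 18}

for item in before:
    b, a = before[item] / respondents, after[item] / respondents
    print(f"{item}: {b:.0%} -> {a:.0%} familiar (gain {a - b:+.0%})")
```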
### **3. Knowledge Distribution and Reach:**
A healthy common knowledge means information shared by one part of the community reaches _most_ of the other members (not just small silos). Social network analysis and digital trace data are useful here. One metric is the **breadth of engagement** with knowledge resources – for instance, when a new report or best practice is posted on the CoP platform, what fraction of members view or download it? Does it circulate across subgroups? Interaction logs can reveal if knowledge is flowing broadly (many members accessing or discussing it) or getting stuck (only a few see it). **Interaction maps** can visualize this: nodes are members and links are information exchanges (replies, shares). A dense, well-connected network suggests that knowledge has pathways to become common (everyone is a few steps away from everyone else’s knowledge). By contrast, if the network map shows isolated clusters, common knowledge might be limited within each cluster. Value network analysis methods can quantify such patterns – modeling how knowledge flows and influence spreads within the CoP. For example, one could track the average number of members reached by a message within a month of its posting (a higher number indicates faster diffusion towards common knowledge). Visual network maps, as suggested in knowledge management practices, make it easy to spot these patterns.
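For example, a simple reach metric can be computed directly from access logs, as in this hypothetical sketch (member names, dates, and the 30-day window are all placeholders):

```python
# Minimal sketch: what share of members engaged with a resource within
# 30 days of posting? Member names, dates, and the window are hypothetical.
from datetime import date, timedelta

members = {"ana", "ben", "chloe", "dev", "elena"}
posted_on = date(2025, 6, 1)
access_log = [  # (member, date_accessed)
    ("ana", date(2025, 6, 2)),
    ("ben", date(2025, 6, 15)),
    ("chloe", date(2025, 6, 28)),
    ("ana", date(2025, 7, 20)),  # falls outside the 30-day window
]

window = timedelta(days=30)
reached = {m for m, d in access_log if m in members and d - posted_on <= window}
print(f"Reach within 30 days: {len(reached)}/{len(members)} members "
      f"({len(reached) / len(members):.0%})")
```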
### **4. Participation and Co-Creation Rates:**
Common knowledge grows when members actively share and co-create content. Thus, engagement metrics serve as proxy indicators. High attendance at knowledge-sharing events (webinars, meetings) and active participation rates (speaking up in discussions, contributing to documents) signal that members are investing in collective understanding. You might measure the **participation ratio** – what percentage of members contribute content vs. lurk – and aim to increase it over time. Even those who “lurk” are often absorbing knowledge, as studies have shown that lurkers learn vicariously by reading others’ posts. Still, a core of contributors ensures fresh knowledge enters the common pool. Another metric is **content co-authorship**: e.g. number of collaborative outputs (joint reports, wiki edits) produced by multi-member teams. A rise in co-authored outputs indicates deeper shared understanding (members find it worthwhile to synthesize knowledge together). Also consider **reuse of community knowledge** – for instance, count how often members reference the community’s knowledge repository or past discussions in new conversations. When members frequently say “As we discussed last quarter…” or cite a community case study as evidence, it shows that knowledge is not only shared but _accepted as common reference_.
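A participation ratio of this kind can be computed from activity logs; the roster and contribution counts in this sketch are hypothetical:

```python
# Minimal sketch: participation ratio for the last quarter.
# The member roster and contribution counts are hypothetical placeholders.

members = {"ana", "ben", "chloe", "dev", "elena", "farid"}
contributions = {  # member -> posts, comments, or document edits this quarter
    "ana": 7, "ben": 2, "chloe": 0, "dev": 1, "elena": 0, "farid": 0,
}

contributors = {m for m, n in contributions.items() if n > 0}
print(f"Participation ratio: {len(contributors)}/{len(members)} "
      f"({len(contributors) / len(members):.0%}) contributed this quarter")
```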
### **5. Quality of Shared Understanding:**
Beyond quantity and reach, the _quality_ of common knowledge matters. Some qualitative or proxy indicators include:
- **Consistency in Decision-Making:** Do community projects or recommendations show alignment with the shared knowledge base? If multiple teams in a resilience CoP independently design solutions that reflect the same guiding principles, that’s a sign of shared mental models. You could review project proposals or plans for consistency with the community’s agreed-upon best practices or principles.
- **Reduction in Basic Questions:** Track the nature of questions asked in forums or meetings. As common knowledge solidifies, you might expect fewer “basic 101” questions (because that info is already known or documented) and more advanced, context-specific questions. A downward trend in questions about foundational topics (or quick answering of those by peers pointing to existing answers) indicates a mature common knowledge base.
- **Member Confidence and Trust:** Conducting periodic surveys on perceived common knowledge can be insightful. Ask members if they feel “most people in the CoP understand X” or if they trust information coming from the CoP. High confidence that “others know what I know” reflects strong mutual knowledge. Likewise, if members report that they readily adopt suggestions or outputs from the community, it implies the community has established credibility through common understanding.
- **Story Alignment:** In workshops, listen for whether different members tell _compatible stories_ about the community’s domain. For example, if one practitioner describes the goal of nature-based solutions as “enhancing ecosystems for resilience” and another says “using nature for adaptation and co-benefits,” are these seen as reinforcing (common perspective) or at odds? A convergent narrative, even if told in different words, points to shared understanding of purpose. Some facilitators use exercises like collective mapping of the domain (each member writes key elements of a problem/solution, and the overlaps are noted) to gauge conceptual alignment.
No single metric will perfectly capture “common knowledge,” but by combining multiple indicators, we can build a picture. For instance, a CoP might set targets such as: 80% of members can identify our three priority practices (survey result); knowledge resources reach at least 70% of members (web analytics); at least 50% of members have contributed in the last quarter (participation logs); and zero instances of conflicting definitions in official outputs (content audit). These kinds of metrics make the abstract concept of common knowledge more tangible and manageable.
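Such targets can be tracked with a very simple “dashboard” comparison; all figures in this sketch are hypothetical illustrations, not recommended benchmarks:

```python
# Minimal sketch: comparing measured indicators against the illustrative
# targets above. All figures are hypothetical, not recommended benchmarks.

indicators = {  # indicator -> (target, measured)
    "members identifying the three priority practices (survey)": (0.80, 0.74),
    "members reached by new knowledge resources (analytics)": (0.70, 0.82),
    "members contributing in the last quarter (logs)": (0.50, 0.41),
}

for name, (target, measured) in indicators.items():
    status = "on track" if measured >= target else "needs attention"
    print(f"{name}: {measured:.0%} vs target {target:.0%} -> {status}")
```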
# **Data Sources and Strategies for Evaluating Common Knowledge**
To apply the metrics above, we need to gather data from various sources in and around the community of practice. Here are some practical data sources and evaluation strategies we should consider:
### **Surveys and Self-Assessments**
Surveys are a straightforward way to gauge shared knowledge and perceptions. They can be administered annually or after major events. For example, you might survey members with true/false or multiple-choice questions on important domain facts, or ask Likert-scale questions like “I feel that members of this CoP share a common understanding of [topic].” Surveys can also capture _perceived_ common knowledge – e.g. “How confident are you that others in the community are familiar with X?” – which speaks to the mutual awareness aspect. It’s often insightful to ask some open-ended questions as well (e.g. “What core practices do you think _everyone_ in the community should know?”) and see if answers converge. A tip is to compare survey results for newcomers vs. veterans: if there’s a large gap in basic knowledge, that suggests onboarding needs strengthening to build common knowledge from the start.
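A newcomer-versus-veteran comparison is easy to compute once quiz scores are tagged by tenure; the groups and scores here are hypothetical placeholders:

```python
# Minimal sketch: comparing core-knowledge quiz scores for newcomers vs veterans.
# Tenure labels and scores (points out of 10) are hypothetical placeholders.
from statistics import mean

scores = [
    ("newcomer", 4), ("newcomer", 6), ("newcomer", 5),
    ("veteran", 9), ("veteran", 8), ("veteran", 9),
]

by_group: dict = {}
for group, score in scores:
    by_group.setdefault(group, []).append(score)

for group, values in by_group.items():
    print(f"{group}: mean score {mean(values):.1f}/10")
# A large newcomer-veteran gap suggests onboarding should do more to build
# baseline common knowledge.
```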
### **Meeting and Interaction Analysis**
Every meeting or discussion generates a transcript or notes that can be analyzed qualitatively or with text analysis tools. By coding these transcripts, one can look for evidence of common knowledge. For instance: Are the same key issues coming up repeatedly (shared concerns)? Do different members reference the same data or sources (shared reference points)? Are clarifications frequently needed on fundamental terms (which would indicate lack of common ground)? One could set up a simple content analysis to count how often agreed-upon focal terms (say the “5 Things” from the earlier example) appear in meeting notes over time – an increase might indicate deeper integration into group discourse. Conversation analysis can also reveal dynamics: if members rarely have to ask “What do you mean by that?” or correct misunderstandings, it suggests implicit common ground. On the flip side, if meetings often get bogged down in aligning terminology, it flags an area for improvement. Even **tone** and participation patterns can be telling: as common knowledge solidifies, discussions may shift from instructional (experts teaching others) to collaborative (peers building on each other’s ideas), and more voices may chime in without fear of not understanding.
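A basic version of this focal-term count might look like the following sketch; the meeting notes and term list are hypothetical placeholders:

```python
# Minimal sketch: how often do agreed focal terms appear in meeting notes?
# The notes and term list are hypothetical placeholders.
import re

focal_terms = ["sea level rise", "nature-based solutions", "just transition"]
meeting_notes = {
    "2025-03": "We revisited sea level rise projections with the coastal team...",
    "2025-06": "Members linked just transition goals to nature-based solutions pilots "
               "and cited the sea level rise fact sheet.",
}

for month, notes in sorted(meeting_notes.items()):
    counts = {t: len(re.findall(re.escape(t), notes, flags=re.IGNORECASE))
              for t in focal_terms}
    print(month, counts)
# Rising counts over time suggest focal terms are becoming part of group discourse.
```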
### **Interaction Maps and Network Analysis**
As mentioned, mapping the social network of the community provides a powerful visualization of knowledge flow. Using tools (like Gephi or simple graphing based on communication logs), you can create a network graph where nodes are members and edges represent interactions (such as Q&A exchanges, co-working on a task, or direct messaging). By annotating or analyzing this graph, you might identify _hubs_ (people who disseminate knowledge widely) and _bridges_ (connections between subgroups). If common knowledge is a goal, you want to see knowledge brokers connecting different parts of the network and no critical information bottlenecked in one subgroup. One can calculate network metrics like **density** (how connected everyone is on average) or **centralization** (whether a few nodes dominate communication). A denser, more decentralized network usually supports more shared knowledge because information can circulate along multiple paths. Value network analysis goes a step further by not just mapping who talks to whom, but qualitatively assessing what _kind_ of knowledge or value flows along those links (expertise, feedback, support, etc.). This can uncover, for instance, that everyone relies on a certain sub-team for technical data – which might be fine, but to build common knowledge, the community might then encourage that sub-team to present tutorials to all.
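For instance, density and a simple (Freeman-style) degree centralization can be computed with the networkx Python library from an interaction edge list; the members and interactions below are hypothetical:

```python
# Minimal sketch: density and a simple degree-centralization measure for an
# interaction network, using the networkx library. The edge list is hypothetical.
import networkx as nx

interactions = [  # (member_a, member_b): at least one exchange between them
    ("ana", "ben"), ("ana", "chloe"), ("ben", "chloe"),
    ("chloe", "dev"), ("dev", "elena"),
]

G = nx.Graph(interactions)
density = nx.density(G)

# Freeman-style degree centralization: 0 = evenly connected, 1 = star network.
n = G.number_of_nodes()
degrees = [d for _, d in G.degree()]
centralization = sum(max(degrees) - d for d in degrees) / ((n - 1) * (n - 2))

print(f"density = {density:.2f}, degree centralization = {centralization:.2f}")
# Higher density and lower centralization mean knowledge has many paths to travel.
```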
### **Digital Trace Data**
Modern communities, especially those partially or fully online like CanAdapt, generate a wealth of digital traces – project listings, updates, likes, comments, forum posts, chat logs, document edits, wiki revisions, and so on. These can be mined (respecting privacy and ethics, of course) to measure knowledge dynamics, as the following examples illustrate.
### **Platform Analytics**
Utilize web analytics to see page views, file access counts, search queries, etc. A frequently accessed project page or resource indicates it’s common reference material. If certain forum topics attract many discussion threads, or certain updates draw many likes and comments, that highlights focal areas of knowledge. We can also determine whether usage is widespread or limited to a few power users.
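A quick way to check whether usage is broad or concentrated is to look at how views are distributed across users, as in this hypothetical sketch:

```python
# Minimal sketch: are page views spread across many members or concentrated
# in a few power users? The view log is a hypothetical placeholder.
from collections import Counter

page_views = ["ana", "ana", "ben", "ana", "chloe", "ben", "ana", "dev"]

counts = Counter(page_views)
total_views = sum(counts.values())
top_two_share = sum(n for _, n in counts.most_common(2)) / total_views
print(f"{len(counts)} distinct viewers; top 2 account for {top_two_share:.0%} of views")
```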
### **Knowledge Base Health**
In our shared knowledge base (both canadapt.network and canadapt.wiki), we can track metrics like number of contributions per month, update frequency, and the ratio of views to edits. A growing, actively maintained repository usually correlates with growing common knowledge (it shows the community is externalizing what it knows for all to see). Conversely, if the repository is stagnant or rarely consulted, much knowledge might remain siloed in individuals.
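These repository health metrics reduce to a few counts per month; the edit and view figures in this sketch are hypothetical:

```python
# Minimal sketch: simple knowledge-base health metrics per month.
# Edit and view counts are hypothetical placeholders.

monthly_activity = {  # month -> (edits, views)
    "2025-04": (14, 420),
    "2025-05": (9, 510),
    "2025-06": (17, 600),
}

for month, (edits, views) in sorted(monthly_activity.items()):
    print(f"{month}: {edits} edits, {views} views, "
          f"views-to-edits ratio {views / edits:.0f}:1")
# Steady edits show knowledge is being externalized; a very high views-to-edits
# ratio may mean content is consulted but rarely refreshed.
```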
### **Feedback and Comment Analysis**
We can also analyze feedback submissions and comments on posts. Simple metrics like the average response time to questions and comments can indicate responsiveness (if someone asks “Has anyone tried X?”, and multiple peers respond quickly with answers, it demonstrates that knowledge is readily available and shared). Also, analyzing the content of messages with natural language processing can identify common themes and sentiment – for instance, detecting increased usage of words like “we all” or “as agreed” over time might reflect emerging consensus.
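Average first-response time, for example, can be computed from question and answer timestamps; the threads below are hypothetical placeholders:

```python
# Minimal sketch: average time until a question gets its first peer response.
# The question/response timestamps are hypothetical placeholders.
from datetime import datetime

threads = [  # (question_posted, first_response)
    (datetime(2025, 6, 2, 9, 0), datetime(2025, 6, 2, 11, 30)),
    (datetime(2025, 6, 5, 14, 0), datetime(2025, 6, 6, 9, 0)),
    (datetime(2025, 6, 9, 10, 0), datetime(2025, 6, 9, 10, 45)),
]

delays_hours = [(resp - asked).total_seconds() / 3600 for asked, resp in threads]
print(f"Average first-response time: {sum(delays_hours) / len(delays_hours):.1f} hours")
```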
### **External Feedback and Outcomes**
Sometimes looking outside the immediate community provides insight into its common knowledge. If the CoP produces outputs for external audiences (reports, recommendations, trainings delivered), one can evaluate those outputs for coherence and consistency. Do they speak in a unified voice? Are messages consistent across outputs (suggesting an internal common understanding)? Also, funders might look at **outcome indicators** such as how many member organizations have adopted practices promoted by the CoP. If multiple distinct organizations start implementing a similar approach (that the CoP advocated), it implies the knowledge became common enough to spur coordinated action. For example, if five cities in the network all develop a climate risk map following the same template, that hints that the template’s importance was common knowledge in the CoP.
In using these strategies, it’s wise for us to mix quantitative and qualitative evaluation. Numbers can signal _that_ something is working (e.g., rising participation rate), while interviews or open-ended survey responses tell _why_ (e.g., members feel a greater sense of shared purpose). Also, be careful not to over-measure in ways that burden the community; many metrics can be gathered unobtrusively (like analytics) or folded into regular activities (like a quick poll at the end of a webinar). The aim is to inform facilitation: by knowing the state of common knowledge, leaders can decide to, say, run a refresher session on a concept, or connect a sparsely linked subgroup to the rest through a buddy system.
# **Conclusion: Checklist for Cultivating Common Knowledge in our Practitioner Networks**
Common knowledge is both a _destination_ – a state where members share key understandings – and a _journey_ of continual alignment as new people and ideas enter the community. By focusing on focal points and using smart metrics, CoP facilitators and funders can actively nurture this foundation of collective learning. Below is a checklist of diagnostic questions to regularly ask of your community of practice, ensuring that common knowledge is being effectively cultivated:
- **Shared Purpose & Vocabulary:** Have we clearly defined our CoP’s core purpose and terms, and do members consistently articulate them? (Test: Ask a few members to explain the community’s mission or define a key term – do you get similar answers?)
- **Key Focal Points:** What are the current focal points of our community? (e.g., a unifying goal, framework, or story everyone refers to.) Are these focal points widely recognized and endorsed by the members? If not, how can we establish ones that resonate?
- **Knowledge Access:** Is important knowledge easily accessible to all members? (Consider: Do we have a central repository or regular forums where collective knowledge is maintained? Are newcomers pointed to the “must-read” or “must-know” resources quickly?)
- **Knowledge Gaps:** Where do knowledge gaps or misunderstandings still exist among members? (Check for signs like recurring basic questions or uneven participation. Use surveys or feedback forms to ask members what topics they feel _less_ confident about that others seem to know.)
- **Engagement & Sharing Culture:** Are members actively sharing their insights and experiences? (Look at participation logs: if only a small fraction are contributing, plan activities to involve more people. Also foster an atmosphere where “no question is stupid” – this encourages those who lack certain common knowledge to speak up and get help, thereby bringing them into the loop.)
- **Alignment in Practice:** When the community collaborates or when members apply community knowledge in their work, do their approaches show a common basis? (For example, review a few independent projects from members – do they reflect shared principles or wildly different philosophies? Consistency suggests strong common knowledge, while divergence might mean the community needs to discuss and reconcile understandings.)
- **Feedback and Evolution:** Do we regularly solicit feedback on our knowledge-sharing processes and adapt them? (A CoP should evolve its focal points and methods as the domain and membership change. Make sure there are channels for members to say “I think we’re not all understanding X” or “We need more knowledge on Y.” Adjust activities accordingly, such as introducing a new focal training or updating the knowledge base.)
Using this checklist, we can perform a “common knowledge audit” and take action as needed – perhaps funding a workshop to build shared capacity in a weak area, or developing a community handbook or videos to codify tacit know-how. In summary, cultivating common knowledge in a community of practice is an ongoing, dynamic process. When done well, it should create a virtuous cycle: shared knowledge leads to stronger relationships and coordinated action, which in turn generates new shared learning. By being intentional about focal points and measurement, we ensure this cycle keeps turning productively, empowering communities to tackle big challenges like climate adaptation and resilience with a united understanding and purpose.
[^1]: https://www.theguardian.com/books/2025/sep/29/when-everyone-knows-that-everyone-knows-by-steven-pinker-review-communication-breakdown
[^2]: https://en.wikipedia.org/wiki/Common_knowledge_(logic)
[^3]: https://opentextbc.ca/workinggroupguide/chapter/communities-of-practice/
[^4]: https://eprints.usm.my/34140/1/9_Paradigm_Geeta_14.pdf
[^5]: https://en.wikipedia.org/wiki/Focal_point_(game_theory)
[^6]: https://ndcpartnership.org/knowledge-portal/climate-toolbox/iucn-global-standard-nature-based-solutions