How to Implement OBE for NBA Accreditation: Mapping & Attainment

By Kramah Team
OBE Implementation Guide For NBA Accreditation

Outcome-Based Education (OBE) has become the central pillar of NBA accreditation for one simple reason: it shifts the focus from what teachers cover to what students actually achieve. NBA’s manuals, workshops, and SAR formats repeatedly underline this shift. For SAR 2025 and beyond, the expectation is explicit—institutions must prove that OBE is implemented in teaching, learning, and assessment. Not just written in files. Not just shown in templates. Implemented.

Accreditation teams now look for a complete evidence trail: clearly written outcomes, mapped assessments, CO and PO attainment calculations, rubric-based evaluations, and documented continuous improvement. This full OBE cycle—define → align → assess → analyze → improve—is the backbone of accreditation. Institutions that demonstrate this cycle with transparent, data-backed methods position themselves strongly in the evaluation process.

OBE and NBA: Core Concepts

NBA’s OBE model revolves around four interconnected layers of outcomes:

  • Course Outcomes (COs): What students should know or do by the end of a course. Crucially, these must be written using action verbs from Bloom’s Taxonomy (e.g., Analyze, Design, Evaluate) to ensure assessments measure the correct cognitive level. These are verified through tests, labs, assignments, projects, viva, and other assessment components.
  • Program Outcomes (POs): NBA’s graduate attributes—12 outcomes that every engineering graduate must attain, plus any additional outcomes defined by the institution.
  • Program Specific Outcomes (PSOs): Discipline-oriented abilities that reflect the unique strengths of each program.
  • Program Educational Objectives (PEOs): Career or professional achievements expected a few years after graduation.

NBA tied accreditation directly to OBE adoption around 2013, and every manual since then reinforces the requirement that curriculum design, teaching practices, and assessments must align with these outcomes. It’s not enough to declare the outcomes; institutions must show how they shape the curriculum and how student performance demonstrates attainment.

OBE implementation for NBA follows a consistent logic: outcomes are defined, curriculum and assessments are aligned to them, attainment is measured through direct and indirect evaluation, and improvement actions are taken based on gaps. This is the continuous loop NBA expects to see in SARs and during visits.

OBE CO–PO–PSO–PEO Mapping Framework

Mapping is the structural foundation that links the entire OBE system. Without it, attainment calculation has no direction and no justification. NBA expects every course to map its COs to the POs and PSOs it supports, creating a transparent chain from classroom learning to program-level achievements.

A widely accepted mapping method uses a 3-level strength scale:

  • 1 – Low
  • 2 – Medium
  • 3 – High

Each CO is mapped to the POs and PSOs it genuinely contributes to, and these mapping strengths become weights during PO/PSO attainment calculation. When CO attainment is computed, these values feed the program-level outcomes by amplifying or reducing each CO’s influence according to the mapping strength.

Mapping also plays a major role in curriculum design. NBA workshops advise institutions to start from PEOs and POs, then design curriculum and COs to ensure every PO and PSO is adequately covered. This backward design approach prevents gaps and ensures alignment throughout the program.

Documentation is critical. Evaluators expect mapping matrices, minutes of curriculum meetings, mapping justifications, and version histories. Consistent record-keeping proves that mapping decisions were intentional, reviewed, and based on academic rationale rather than filled out for compliance.

Direct vs Indirect Assessment

Direct and indirect assessment form the two core evidence streams in OBE-driven NBA accreditation. Direct assessment measures what students actually demonstrate through their work, exams, quizzes, labs, projects, assignments, and viva. Every question or task is mapped to specific COs, so the scores directly reflect how well each outcome has been met.

Indirect assessment captures stakeholder perception. Course-exit surveys, program-exit surveys, alumni feedback, and employer feedback provide insight into how well outcomes are being achieved from the learners’ and industry’s point of view. These are not performance scores but structured opinions mapped to COs, POs, and PSOs.

Most NBA-aligned institutions use well-recognized weightages:

  • CO attainment: 90% direct + 10% indirect
  • PO/PSO attainment: 80% direct + 20% indirect

These ratios aren’t mandatory, but they must be consistent and justified. Institutions need to document why a particular split is chosen, how it aligns with their assessment design, and how it remains stable across batches.

CO Attainment: Step-by-Step Calculation

1. Setting Performance Targets

OBE models begin by defining what “success” looks like. Programs set a target percentage of students (commonly 60–70%) who should score at or above a defined threshold, often 40–50% of the maximum marks, for each CO. NBA expects these targets to be dynamic: if a target is consistently met for three consecutive years, the department should raise the threshold (e.g., from 60% to 65%) to demonstrate continuous improvement. This creates a measurable benchmark that determines attainment.

2. Mapping Assessment Items to COs

Every assessment item must be tagged to one or more COs. Question papers, lab experiments, rubric criteria, and project components each carry CO labels. This tagging ensures CO-wise marks can be extracted and used in the calculation.

3. Computing Direct CO Attainment

For each CO, the percentage of students who meet or exceed the threshold is calculated. That percentage is then converted into an attainment level using the institution’s level scale.

Example:

  • Level 1 = low
  • Level 2 = moderate
  • Level 3 = high

Some institutions use 5-level scales (very low to very high) ranging from 0.5 to 3, as seen in NBA workshop slides.
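As a concrete sketch, the direct-attainment step can be expressed in a few lines of Python. The 50% score threshold and the level cut-offs below are illustrative assumptions, not NBA-prescribed values; institutions should substitute their own approved targets:

```python
# Direct CO attainment (sketch): the share of students who meet or
# exceed the score threshold is converted into a level on a 3-point
# scale. Threshold and cut-offs here are assumed, not NBA-prescribed.
def direct_co_attainment(marks, max_marks, threshold_pct=50,
                         level_cutoffs=(50, 60, 70)):
    threshold = max_marks * threshold_pct / 100
    passed = sum(1 for m in marks if m >= threshold)
    pct = 100 * passed / len(marks)
    level = sum(1 for cutoff in level_cutoffs if pct >= cutoff)
    return pct, level

# CO1 marks out of 20 for a hypothetical class of 10 students.
marks = [12, 18, 9, 15, 20, 7, 14, 16, 11, 19]
pct, level = direct_co_attainment(marks, max_marks=20)
print(pct, level)   # 80.0% of students crossed the threshold -> level 3
```

Here 80% of the class crossed the 50% threshold, which exceeds the highest cut-off (70%), so this CO is attained at level 3 on the chosen scale.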

4. Indirect CO Attainment

Course-exit surveys provide students’ perceived achievement of the COs. These responses are scaled to match the same attainment scale used for direct attainment levels. Indirect attainment is usually minor but reinforces credibility.

5. Combining Direct and Indirect

The final CO attainment is computed by combining both streams using predefined ratios, most commonly 90% direct + 10% indirect. This blended score becomes the official CO attainment value for the course.
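The blend itself is a simple weighted average on the common attainment scale. This sketch assumes the 90/10 split described above; the same formula serves the PO/PSO-level blend with a 0.8 weight:

```python
def blend_attainment(direct, indirect, w_direct=0.9):
    """Combine direct and indirect attainment levels (same 0-3 scale)."""
    return round(w_direct * direct + (1 - w_direct) * indirect, 2)

co_final = blend_attainment(3, 2)                      # 0.9*3 + 0.1*2 = 2.9
po_final = blend_attainment(2.5, 2.0, w_direct=0.8)    # 0.8*2.5 + 0.2*2.0 = 2.4
```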

PO / PSO Attainment: Calculation and Aggregation

1. Course-Level PO/PSO Attainment

Once CO attainment values are available, they are multiplied by the CO–PO/PSO mapping strengths (1, 2, or 3). This weighted combination gives the PO/PSO attainment contributed by each course.
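This weighting can be sketched in Python using a hypothetical course mapping matrix (the CO names, PO names, strengths, and attainment levels below are illustrative):

```python
# CO-PO mapping strengths for one course (1=Low, 2=Medium, 3=High);
# absent entries mean the CO does not address that PO.
co_po_map = {
    "CO1": {"PO1": 3, "PO2": 2},
    "CO2": {"PO1": 2, "PO3": 3},
    "CO3": {"PO2": 1, "PO3": 2, "PO4": 3},
}

def po_contribution(co_levels, mapping):
    """Average CO attainment levels weighted by mapping strength,
    giving this course's contribution to each PO."""
    totals, weights = {}, {}
    for co, pos in mapping.items():
        for po, strength in pos.items():
            totals[po] = totals.get(po, 0) + strength * co_levels[co]
            weights[po] = weights.get(po, 0) + strength
    return {po: round(totals[po] / weights[po], 2) for po in totals}

co_levels = {"CO1": 3, "CO2": 2, "CO3": 2}   # final CO attainment levels
contrib = po_contribution(co_levels, co_po_map)
print(contrib)   # e.g. PO1 = (3*3 + 2*2) / (3+2) = 2.6
```

A strongly mapped CO (strength 3) pulls the PO value toward its own attainment three times as hard as a weakly mapped one, which is exactly the amplifying effect described above.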

2. Program-Level Direct Attainment

For each PO or PSO, the program aggregates attainment from all courses mapped to that outcome. Institutions either take a simple average or apply credit-weighted averages to reflect the importance of specific courses.
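Both aggregation styles can be sketched as follows; the course codes, PO values, and credit counts are hypothetical:

```python
# Aggregate one PO across all courses mapped to it, either as a simple
# average or as a credit-weighted average.
def program_po_attainment(course_values, credits=None):
    if credits is None:
        return round(sum(course_values.values()) / len(course_values), 2)
    weighted = sum(course_values[c] * credits[c] for c in course_values)
    return round(weighted / sum(credits[c] for c in course_values), 2)

po1_by_course = {"CS201": 2.6, "CS305": 2.1, "CS410": 2.8}
simple = program_po_attainment(po1_by_course)
weighted = program_po_attainment(
    po1_by_course, credits={"CS201": 4, "CS305": 3, "CS410": 2})
print(simple, weighted)   # simple average 2.5 vs credit-weighted ~2.48
```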

3. Indirect PO/PSO Attainment

Program-level surveys (program-exit, alumni, and employer surveys) provide indirect PO and PSO attainment values. These are scaled to the same level system as the direct attainment.

4. Final PO/PSO Attainment

Direct and indirect values are combined using widely used ratios such as 80% direct + 20% indirect. NBA’s sample orientations often demonstrate this calculation: course-level PO contributions are aggregated, averaged, and then merged with indirect survey levels to produce the final attainment level. Since the mapping strength uses a scale of 1–3, the final calculated PO attainment will also fall between 0 and 3. A result of 2.2 or higher is generally considered excellent alignment.

This layered calculation, from CO attainment to mapping-weighted PO/PSO attainment to combined final scores, forms the primary evidence NBA expects to see in SARs and during accreditation visits.

Rubrics: Design, Levels, and Integration into Attainment

1. Role of Rubrics in OBE–NBA

Rubrics are essential for assessing complex, higher-order learning tasks: labs, mini-projects, full projects, seminars, internships, and capstone work. These activities often address higher Bloom’s levels and contribute heavily to POs related to problem analysis, design, ethics, teamwork, and modern tool usage. NBA evaluators expect rubric-based assessment records because they provide structured, transparent evidence of how performance aligns with outcomes.

2. Structure of Good Rubrics

Effective rubrics in NBA–OBE are analytic, not holistic. They break down performance into multiple criteria such as design ability, implementation quality, documentation, teamwork, or presentation skills. Each criterion is explicitly mapped to relevant POs or PSOs, making the link between performance and outcomes traceable.

Most institutional models and OBE handbooks recommend 3–5 performance levels, for example:

  • Unsatisfactory
  • Satisfactory
  • Good
  • Excellent

Each level carries a descriptor and a numeric score aligned with the institution’s attainment scale. This clarity ensures consistency in scoring and makes it easier to convert rubric output into CO attainment.

3. Using Rubrics in CO and PO/PSO Attainment

Rubric scores are first mapped to the COs associated with the activity. Once converted to attainment levels, these CO attainments flow upward into PO and PSO attainment through the CO–PO mapping matrix. This chain—rubric → CO level → PO/PSO level—is a key part of OBE evidence.
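A minimal sketch of the rubric-to-CO conversion, assuming a 4-point analytic rubric (Unsatisfactory=1 through Excellent=4) rescaled to a 0–3 attainment scale; the criterion names, scores, and CO mappings are hypothetical:

```python
# Average class score per rubric criterion on a 1-4 scale (assumed data).
rubric_scores = {"design_ability": 3.5, "implementation": 2.5,
                 "documentation": 2.0}
# Each criterion is tagged to the CO it evidences.
criterion_to_co = {"design_ability": "CO3", "implementation": "CO3",
                   "documentation": "CO4"}

def rubric_to_co_levels(scores, mapping, max_score=4):
    """Average the criteria feeding each CO, then rescale to 0-3."""
    sums, counts = {}, {}
    for crit, score in scores.items():
        co = mapping[crit]
        sums[co] = sums.get(co, 0) + score
        counts[co] = counts.get(co, 0) + 1
    return {co: round(sums[co] / counts[co] / max_score * 3, 2)
            for co in sums}

rubric_co_levels = rubric_to_co_levels(rubric_scores, criterion_to_co)
print(rubric_co_levels)   # CO3 = (3.5 + 2.5)/2 / 4 * 3 = 2.25
```

These CO-level values then feed the CO–PO mapping matrix exactly like exam-derived CO attainments, completing the rubric → CO → PO/PSO chain.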

Consistency is critical. NBA evaluators expect similar rubric templates across courses, uniform level scales, and clearly recorded evaluations. This consistency shows that the system works as a coordinated assessment mechanism rather than an isolated set of activities.

Continuous Improvement and SAR 2025 Expectations

Continuous Quality Improvement (CQI) is a central requirement for SAR 2025. NBA wants institutions to demonstrate the complete cycle:

define → design → measure → analyze → act → re-measure

It’s not enough to compute attainment once. Programs must maintain multi-year attainment data, compare trends, and show how gaps triggered real academic actions.

Common examples of CQI actions include:

  • Revising CO statements for clarity or alignment.
  • Reducing or increasing CO–PO mapping strengths based on curriculum relevance.
  • Redesigning assessments to better measure targeted outcomes.
  • Updating or expanding syllabi.
  • Introducing more design-oriented or industry-oriented tasks.
  • Strengthening internships, practical exposure, or activities linked to specific lagging POs.

These actions must be documented with minutes, revisions, and follow-up attainment results to demonstrate measurable improvement.

Conclusion

For SAR 2025, the message from NBA is clear: accreditation depends on how transparently and rigorously an institution implements OBE. Every part of the system (CO statements, assessment design, rubric evaluations, mapping matrices, attainment calculations, and improvement actions) must create a visible, evidence-driven chain that shows students are actually achieving the intended outcomes.

Accurate CO and PO calculations prove that learning is being measured correctly. Well-structured rubrics ensure that complex skills are assessed with fairness and clarity. A functioning CQI cycle shows that the program learns from its own data, acts on gaps, and continuously raises its academic standards. When all these elements work together, they build the credibility NBA expects to see. They turn OBE from a paperwork requirement into a functioning academic engine, and that’s what ultimately strengthens a program’s case for accreditation.

Role of Technology in OBE

Effective OBE needs accuracy, consistency, and reliable evidence, semester after semester. Digital platforms make this much easier by automating CO–PO mapping, tagging assessments, converting rubric scores into attainment levels, and generating clean dashboards for NBA review.

Systems like Kramah’s KI-OBE Software bring all these pieces together in one workflow, reducing manual effort and keeping the full OBE cycle transparent and audit-ready. For institutions implementing OBE seriously, technology serves as the backbone that keeps mapping, evaluation, and CQI aligned with NBA expectations.

Frequently Asked Questions (FAQs)

What is OBE in NBA accreditation?

Outcome-Based Education (OBE) in NBA focuses on defining COs, aligning them with POs and PSOs, assessing student performance, and proving attainment through measurable, evidence-based methods.

How do you write Course Outcomes (COs) for NBA?

COs must be written using action verbs from Bloom’s Taxonomy, such as Analyze, Design, and Evaluate, ensuring assessments measure the correct cognitive levels.

How is CO–PO mapping done in OBE?

CO–PO mapping uses a strength scale of 1 (Low), 2 (Medium), 3 (High) to show how each CO contributes to specific POs. These strengths act as weights during PO attainment calculation.

How do you calculate CO attainment?

CO attainment is calculated by setting performance targets, mapping assessment items to COs, computing direct attainment, combining indirect survey responses, and merging them using a ratio like 90% direct + 10% indirect.

What does Continuous Improvement mean in SAR 2025?

Continuous Improvement requires programs to analyze multi-year attainment trends, identify gaps, take corrective actions (like redesigning COs or assessments), and show measurable improvement in subsequent cycles.

Which software tools help with mapping and attainment?

Platforms such as Kramah’s KI-OBE that support CO–PO mapping, question tagging, automated CO/PO/PSO attainment calculation, evidence tagging, and SAR-ready reports are commonly used. Recent guides recommend tools that streamline mapping, attainment analysis, and documentation for NBA.