  • March 17, 2026
  • Arth Data Solutions

BNPL & Small-Ticket Lending: Gaps the Models Don’t See

The first time a BNPL or small-ticket portfolio really worries a lender, it’s rarely because of a model report.

It’s usually because of a surprise in a big-book meeting.

You’re in a quarterly risk review at a universal bank.

Most of the time is reserved for home loans, LAP, large corporate exposures. The unsecured slide is meant to be a quick update before lunch.

One page on credit cards. One on personal loans.

Then a slide appears with a calm heading:

“BNPL & Small-Ticket Portfolios – Q3 Snapshot”

The bullets look innocuous:

  • Active users: up 62% year-on-year
  • Average ticket: ₹2,800
  • Loss rate: 2.1% of value, “within plan”

The analytics lead adds:

“Models remain stable. Gini’s around 0.5.

Losses are broadly in line with our original assumptions.”

The business head smiles:

“This is our acquisition engine.

These are small, short-tenor exposures. Even if we get it slightly wrong, it won’t sink the ship.”

Someone scrolls to the backup slides.

Buried there is a table the collections team insisted on adding:

  • For a cohort of customers who started with BNPL and later took personal loans and cards with you:
    – BNPL loss rate looks contained.
    – But 12-18 months out, their PL and card delinquencies are running 40-50% higher than comparable non-BNPL cohorts.
    – A few PIN-code clusters and digital channels stand out.

The Head of Collections says quietly:

“We’re seeing a pattern.

Many of the customers in our tougher buckets started with these ‘small tickets’.

BNPL doesn’t blow up on its own. It shows up later in the main book.”

The room shifts.

The CRO asks:

“Did our BNPL models ever flag these customers as higher risk?

Or did we treat them as ‘low-risk, high-engagement’ and fast-track them into bigger products?”

The analytics lead hesitates:

“From a pure BNPL lens, their behaviour was great.

High utilisation, timely repayments.

So they scored well for cross-sell.”

Nobody says it bluntly, but the pattern is clear:

  • The BNPL models did their job inside the narrow product.
  • The institution read that as a green signal for everything else.

The underlying belief in the room is still simple:

“Our BNPL and small-ticket models are calibrated.

Losses are within plan.

If there was a real risk problem, it would show up there.”

That belief is exactly where the gap starts.

 

The belief: “If small-ticket models look stable, the risk is contained”

Across banks, NBFCs and fintechs playing in BNPL or small-ticket lending, you hear versions of the same sentence:

“These are tiny exposures, short-tenor, highly diversified.

Our models are trained on millions of events.

Loss rates are within plan.

If there was a structural issue, it would already be visible.”

It shows up in different rooms.

In a product steering meeting:

“BNPL is our low-risk entry product.

It drives engagement and builds repayment history.

We’re not taking large balance-sheet bets here.”

In a funding or investor deck:

“Small-ticket exposures are algorithmically underwritten.

They diversify risk and help us discover good customers earlier.”

In a model-validation discussion:

“The BNPL model’s Gini is healthy.

Bad rates by decile are monotonic.

Population stability is acceptable.”
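Those three checks are mechanical, and worth being precise about. A minimal sketch of how each is typically computed, with toy data and thresholds that are illustrative rather than tied to any real validation standard:

```python
import math

def gini(scores, bads):
    """Gini = 2*AUC - 1, with AUC from pairwise bad-vs-good comparison.
    `scores` are PDs (higher = riskier); `bads` are 0/1 outcomes."""
    pairs = wins = 0.0
    for s_b, y_b in zip(scores, bads):
        if not y_b:
            continue
        for s_g, y_g in zip(scores, bads):
            if y_g:
                continue
            pairs += 1
            if s_b > s_g:        # bad correctly scored riskier than good
                wins += 1
            elif s_b == s_g:     # tie counts half
                wins += 0.5
    return 2 * (wins / pairs) - 1

def decile_bad_rates(scores, bads, n_bins=10):
    """Bad rate per score bin; 'monotonic by decile' means these rise with score."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    size = len(order) // n_bins
    rates = []
    for b in range(n_bins):
        idx = order[b * size:] if b == n_bins - 1 else order[b * size:(b + 1) * size]
        rates.append(sum(bads[i] for i in idx) / len(idx))
    return rates

def psi(expected_shares, actual_shares):
    """Population Stability Index; below roughly 0.1 is usually read as 'stable'."""
    return sum((a - e) * math.log(a / e) for e, a in zip(expected_shares, actual_shares))
```

A healthy Gini, monotonic bad rates by bin, and a small PSI are exactly the evidence a validation pack summarises as "stable". None of it says anything about how the score is used outside the product.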

Underneath is a quiet assumption:

  • If the small-ticket portfolio looks controlled in its own metrics, it is not a strategic risk.

That feels reasonable.

The uncomfortable part is:

BNPL and small-ticket lending don’t just live in their own boxes.

They change how the same customer behaves across your entire relationship.

The gaps in the models are not only about prediction accuracy.

They’re about what the models never set out to measure.

 

What actually happens inside BNPL & small-ticket models

If you sit with the modellers and the product teams and follow how these systems are actually built, a few patterns repeat.

1. Models are tuned to the product, not to the person’s future

Most BNPL and small-ticket models are optimised for a very specific question:

“Will this customer repay this small, short-tenor obligation on time?”

So the model code and spec documents talk about:

  • 30-90 day outcomes
  • Probability of default or severe delay on the first few cycles
  • Charge-off behaviour within the BNPL or small-ticket book

Input features are often:

  • Basic KYC and device attributes
  • Limited bureau signal (where available)
  • App and transaction patterns on your own platform
  • Sometimes bank-statement or UPI patterns
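As a concrete sketch of that scope: the target is a short-window delinquency flag, and each input row draws on the feature families above. Every field name, window, and cut-off here is invented for illustration, not a real model spec:

```python
def bnpl_target(dpd_first_cycles, cutoff_dpd=30):
    """Short-window PD label: 30+ days past due on any of the first few
    repayment cycles. The window and cut-off are illustrative choices."""
    return int(max(dpd_first_cycles) >= cutoff_dpd)

def feature_row(customer):
    """One model input row drawn from the feature families listed above.
    All field names are hypothetical; `.get` leaves absent signals as None,
    which is common for thin-bureau customers."""
    return {
        "device_os": customer.get("device_os"),           # KYC / device
        "bureau_score": customer.get("bureau_score"),     # often thin or missing
        "txn_count_90d": customer.get("txn_count_90d"),   # own-platform behaviour
        "avg_upi_inflow_3m": customer.get("avg_upi_inflow_3m"),  # bank / UPI
    }
```

Nothing in this row asks about obligations on other platforms or behaviour at higher limits; that absence is the scope boundary the rest of this section is about.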

On paper, it’s rational:

  • Short-tenor product → short observation window.
  • Aggregate risk small → tighter focus on near-term loss.

What is rarely in scope for the model is:

  • “What does this pattern of BNPL usage tell us about the customer’s overall leverage and future behaviour?”
  • “What is the customer likely to look like when they migrate into PL, card, or bigger lines?”

So the model can be “good” at its stated job, while being blind to:

  • Stacking across multiple BNPL providers.
  • Signs of dependency on small credit for essentials.
  • Behaviour that is a good signal for the ability to repay a few hundred rupees now, but a bad signal for future resilience at higher limits.

The confusion happens later, when the organisation uses “good BNPL behaviour” as if it were a general certificate of credit quality.

The model never promised that.

2. Data is rich on events, thin on context

Small-ticket and BNPL platforms generate beautiful telemetry:

  • Number of purchases.
  • Time between order and repayment.
  • Merchant categories and ticket sizes.
  • Device IDs, app versions, geolocation pings.
  • Tap behaviour that product managers can talk about for hours.

For modellers, it’s a playground.

You see features like:

  • Average cart value over last N purchases
  • Time-of-day patterns of transactions
  • Ratio of discretionary to “essential” spend
  • Contact graph density (when messaging or social data is available)

On the bureau side, signals are often:

  • Thin or missing for many customers.
  • Lagged for small-ticket trades.
  • Aggregated in ways that don’t distinguish BNPL rails from other revolving credit.

So the model ends up:

  • Giving high weight to local, platform-specific behaviour.
  • Giving limited weight to a fuzzy view of total external obligations.

From a product standpoint, that’s fine:

  • You’re deciding on ₹2,000-₹10,000 tickets.
  • You’re relying heavily on what you see in your own app.

From a portfolio standpoint, it’s less comfortable:

  • You know almost nothing about parallel small-ticket obligations on other apps.
  • You have only a partial picture of their overall repayment load.
  • Your “good” BNPL customer may be a “good” customer on five different platforms at once.

The model can’t see that.

Your later products feel the impact.

3. Losses are smoothed away inside the business model

The way BNPL and small-ticket programmes are structured often makes risk look tamer than it is.

You see it in P&L discussions and SteerCo packs:

  • Merchant discount rates (MDR) and fees offset a portion of credit loss.
  • Late fees and other charges soften the hit.
  • Some platforms subsidise losses as “customer acquisition cost”.

So the overall margin slide shows:

  • “Net credit loss: 2.1-2.4% of GMV”
  • “Unit economics: positive or near breakeven”

What that doesn’t tell you:

  • How many customers are quietly rolling between partial payments, short-term top-ups, and “grace” arrangements that never quite show up as full default.
  • How many are relying on small-ticket credit for day-to-day cashflow, with no real path to deleverage.
  • How many later show up elsewhere in your own book as stressed PL or card customers.

The models can look fine.

The P&L can look acceptable.

The human pattern of reliance on micro-credit is not in those numbers.

 

Where the gaps become visible – late

If the models and P&Ls look reasonable, where do you actually see the problem?

It usually appears in three places, much later than you’d like.

1. Cross-product vintage curves

When someone in risk finally asks for a joined-up view, the picture changes.

You get a table in an internal portfolio note that looks roughly like:

  • Cohorts of customers who started with BNPL or a small-ticket line in 2022.
  • Split into:
    – those who only ever used BNPL, and
    – those who later took PL, cards, or bigger lines with you.

For the second group, you see:

  • Vintage curves on PL or cards that sit above the standard book from month 9 onwards.
  • Higher overlimit and roll-forward patterns.
  • More frequent “promise to pay” calls in dialler logs.
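Curves like these need nothing exotic once loan-level data from both books sits in one place. A minimal sketch, assuming hypothetical records of (had prior BNPL, month on book, 30+ DPD flag) for PL or card accounts:

```python
from collections import defaultdict

def vintage_curves(records):
    """records: iterable of (prior_bnpl, month_on_book, is_30plus_dpd).
    Returns {prior_bnpl: {month_on_book: 30+ DPD rate}} so the two
    cohorts' curves can be plotted against each other."""
    counts = defaultdict(lambda: [0, 0])          # (group, mob) -> [dpd, total]
    for prior_bnpl, mob, dpd in records:
        cell = counts[(prior_bnpl, mob)]
        cell[0] += dpd
        cell[1] += 1
    curves = defaultdict(dict)
    for (group, mob), (dpd, total) in counts.items():
        curves[group][mob] = dpd / total
    return curves
```

The comparison in the portfolio note is simply the prior-BNPL curve against the no-BNPL curve by month on book; the gap from month 9 onwards is the separation described above.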

No one is surprised to see some difference.

What worries people is the consistency across channels and PIN codes, even after you strip out obvious fraud or gaming.

The BNPL model did not predict this.

It was never built to.

The organisation still treated “clean BNPL history” as a sign of broad credit health.

2. Collections notes and call logs

Another place where BNPL-related stress leaks is in the notes that nobody reads in meetings.

If you scan through collections systems and call-centre CRMs for BNPL-heavy customers, you find patterns like:

  • “Multiple small loans across apps; customer confused who is calling for what.”
  • “Customer says they didn’t realise this would affect their ‘main bank’ credit.”
  • “Customer is paying essentials with BNPL – groceries, medicines, utilities.”

From a model’s perspective:

  • Those were just successful repayments and high utilisation for months.

From a collections perspective:

  • They are signals that the customer’s overall financial position is fragile.
  • They’re compensating for income volatility or expense shocks with serial micro-credit.

Again, the BNPL PD model didn’t have this as a target variable.

It’s not a model flaw.

It’s a scope flaw.

3. Complaint and dispute patterns

In some banks and NBFCs, BNPL and small-ticket products show up disproportionately in:

·         Ombudsman escalations.

·         Social media complaints.

·         Internal grievance logs.

Themes:

  • Confusion about due dates and amounts when multiple small credits are open.
  • Disputes about merchant refunds and how they interact with BNPL dues.
  • Surprise about how BNPL behaviour affected eligibility for larger loans.

None of this appears in:

  • The model performance slide.
  • The “losses within plan” message.

It appears in:

  • A weekly complaints MIS that only the customer-experience team reads.
  • An occasional internal audit observation about communication clarity.

By the time these patterns get executive attention, the models have already been embedded in multiple journeys and cross-sell flows.

 

Why the gaps stay invisible early

Given all of this, you’d expect more anxiety around BNPL models.

In practice, three things keep the comfort alive.

Small unit size makes strategic risk feel “not worth the time”

In most institutions, leadership time is rationed.

When there’s a choice between:

  • A ₹5,000 crore corporate exposure
  • A ₹50 crore BNPL book with 2-3% planned loss

the BNPL book doesn’t get much airtime.

Risk committees and Board packs give it:

  • One slide.
  • A short remark: “growing as per plan, losses contained”.

Nobody wants to spend half an hour debating model scope for a product whose unit size is a fraction of the main portfolio.

The problem is:

  • BNPL doesn’t stay in its corner.
  • It changes who enters your main book, and how.

By the time that linkage is visible, several vintages are through.

Metrics are fragmented by product

Standard dashboards are structured by division:

  • Cards, PL, home loans, LAP.
  • BNPL / small ticket sits in a separate tab or even a separate system.

So the people looking after:

  • BNPL models and losses see their world.
  • PL and card teams see theirs.

Very few dashboards show:

  • “Behaviour in other products by BNPL usage”, or
  • “BNPL intensity for customers who later defaulted on PL / cards.”

So the connection feels anecdotal, even when it’s already in the data.

It takes a conscious request from someone senior to get that cross-product view.

Until then, each team is pretty sure their models are fine.

Model governance is scoped too narrowly

Model-validation and governance artefacts are typically shaped around:

·         Definition of the model (what product, what outcome).

·         Performance of that model within that product.

·         Back-testing, overrides, stability.

For BNPL and small-ticket, there is almost never a standard question in the pack that says:

“How are we using outputs from this model elsewhere?

What evidence do we have that this is safe?”

As long as:

  • Bad rates track plan, and
  • Gini stays reasonable,

validation is satisfied.

The system-level use of that model’s outputs remains unexamined.

 

What more experienced teams do differently with BNPL & small-ticket risk

Institutions that seem calmer about BNPL and small-ticket don’t necessarily have better algorithms.

They are more explicit about what the models can and cannot tell them.

A few patterns stand out.

1. They refuse to treat BNPL “good” as a general badge

In internal notes and decision rules, you see phrases like:

  • “Good BNPL behaviour = good for BNPL. It is supportive, not sufficient, for PL/cards.”

They act on that by:

  • Requiring separate checks (income, external leverage, behaviour on main products) before offering higher-ticket credit.
  • Capping how aggressive cross-sell can be purely on the back of BNPL usage and on-time repayment.
  • Flagging some high-intensity BNPL patterns as neutral or cautious, not automatically positive.

The models stay as-is.

The interpretation layer gets sharper.

2. They build at least one cross-product view into standard MI

Instead of waiting for ad-hoc analysis, they ask for a standing page in the unsecured risk deck:

  • “Performance of PL and cards by BNPL usage pattern.”

The page is simple:

  • Break PL and card customers into cohorts based on prior BNPL behaviour:
    – no BNPL exposure,
    – light use,
    – heavy BNPL / small-ticket use (by count or amount).
  • Show GNPA, roll-rate, and overlimit behaviour by cohort.
  • Cut by a few key slices: channel, PIN-code clusters, major partners.
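The cohort assignment behind that page takes only a few lines. The cut-offs and field names below are invented placeholders; in practice they would be set from the portfolio's own usage distribution:

```python
def bnpl_cohort(txn_count, total_amount,
                light_max_count=3, heavy_min_amount=25_000):
    """Bucket a customer by prior BNPL intensity. Thresholds are illustrative."""
    if txn_count == 0:
        return "no_bnpl"
    if txn_count <= light_max_count and total_amount < heavy_min_amount:
        return "light"
    return "heavy"

def gnpa_by_cohort(customers):
    """customers: (bnpl_txn_count, bnpl_amount, is_npa) tuples.
    Returns the GNPA rate for each cohort, i.e. one row of the MI page."""
    agg = {}
    for count, amount, npa in customers:
        cohort = bnpl_cohort(count, amount)
        npa_n, total = agg.get(cohort, (0, 0))
        agg[cohort] = (npa_n + npa, total + 1)
    return {c: npa_n / total for c, (npa_n, total) in agg.items()}
```

Adding channel or PIN-code cluster as a second grouping key gives the slices; the point is that the page is cheap to produce once someone asks for it.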

This does two things:

  • Makes it impossible to say “BNPL is too small to matter” with a straight face, once you see the curves.
  • Helps distinguish which BNPL behaviours are benign and which correlate with future stress.

They are not looking for a perfect rule.

They are looking to avoid flying blind.

3. They adjust the questions in model governance

For BNPL and small-ticket models, more experienced teams add two or three blunt lines into the validation template:

  • “Where are outputs of this model used outside the product?”
  • “What risks could arise if those uses assume this PD is a proxy for broader credit quality?”
  • “What evidence do we have that heavy reliance on this model’s ‘good’ segment is safe for larger products?”

It doesn’t turn into another pseudo-framework.

It simply forces a conversation that would otherwise never happen.

Sometimes the outcome is small but important:

  • A decision to slow down PL cross-sell for certain BNPL-heavy profiles.
  • An agreement to pilot different treatment paths in collections for customers with high BNPL reliance.
  • A note that certain partners or channels producing BNPL customers with poor later behaviour will be reviewed.

4. They treat small-ticket telemetry as a risk signal, not just an engagement metric

Product teams love to celebrate:

  • Session length.
  • Click-through rates.
  • Repeat purchase frequency.

More seasoned risk teams quietly ask:

  • “At what point does high-frequency micro-credit usage start to look like dependency, not engagement?”

So in some places, you see simple internal heuristics:

  • Above a certain frequency or amount of BNPL / small-ticket usage, customers are not fast-tracked into larger unsecured lines without additional checks.
  • Certain usage patterns (e.g., heavy use for essentials across multiple months) trigger softer communication and monitoring, not just more offers.
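In the decisioning layer, heuristics like these are usually just a few guarded lines. A hedged sketch, with every threshold and field name invented for illustration:

```python
def cross_sell_gate(bnpl_txns_90d, bnpl_amount_90d, essentials_months,
                    max_txns=6, max_amount=20_000, max_essentials_months=2):
    """Route a BNPL customer before offering a larger unsecured line.
    All thresholds here are illustrative placeholders, not a real policy."""
    if essentials_months > max_essentials_months:
        # Sustained BNPL use for essentials: soften communication and
        # monitor, rather than pushing more offers.
        return "monitor"
    if bnpl_txns_90d > max_txns or bnpl_amount_90d > max_amount:
        # High-intensity usage: income and external-leverage checks first.
        return "additional_checks"
    return "fast_track_eligible"
```

The rule encodes the judgement call directly: high-frequency micro-credit usage is treated as a caution flag, not an engagement win.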

Nobody pretends these rules are perfect.

They do reflect an honest view:

  • “The model can’t tell us this. We have to make a judgement call.”

 

A quiet close: one customer, multiple “small” loans

It is easy, in a busy lending institution, to let BNPL and small-ticket exposures stay in their comfortable box:

  • Small.
  • Short-tenor.
  • Modelled.
  • Within planned loss.

If you keep them there, the models will continue to look fine:

  • PDs will calibrate.
  • Losses will track plan.
  • Dashboards will stay green.

And every quarter, someone in PL or cards will quietly say:

“We’re seeing a lot of tough cases who started with these ‘easy’ products.”

A different way to look at it is to pick a single customer:

  • One who started with a ₹3,000 BNPL purchase on a phone.
  • Took five more small tickets across apps.
  • Then accepted a ₹1.5 lakh personal loan when the offers came.
  • Twelve months later, is juggling calls from three different collections teams.

Then ask in a meeting:

“At which points did our models say ‘yes’ because the numbers were small and the curves looked fine –

and at which points did we stop asking whether all these small yeses were turning into one big problem for the same person?”

The answer isn’t in the PD charts.

It’s in whether you’re willing to treat BNPL and small-ticket models as narrow tools, or as broad comfort.

The institutions that stay out of trouble don’t make their models smarter first.

They make their questions sharper.