
Self-Consent: The Next Frontier for Financial Data Sharing

Self-consent empowers users, but needs verified data and consent-bound computation to eliminate friction, fraud, and misuse.

Written by Berwin D

Apr 7, 2026

Self-Consent: When Users Take Control

Scenario: A freelancer joins a new gig platform. To verify income, the platform requests bank statements. The freelancer logs into their bank account, navigates through multiple authentication screens, downloads PDFs for the last six months, opens each one to manually redact sensitive transactions, uploads them to the platform's portal, and waits.

Three days later: rejected.

The bank statements can't be verified as authentic. The platform asks for paper copies from a branch. The freelancer takes half a day off work, visits their bank, waits in line, gets stamped copies, scans them, and uploads again. Two weeks after starting the process, they finally get approval. The gig opportunity they wanted is already filled.

This is self-consent in action. The user has complete control over their data. But that control comes with infrastructure gaps that create friction at every step, and friction without verification mechanisms creates a deadlock that benefits no one.

In our previous blog in this series, we explored how users often misunderstand the word consent. The consent illusion gives users the impression that consenting automatically protects their data and prevents misuse. They click "I agree" and assume that the act of consenting creates a protective barrier around their information. It doesn't. Consent documents permission but doesn't enforce usage. The ideal solution came down to binding consent to computation, enabling selective inference and preventing misuse. When computation itself becomes consent-aware, unauthorized operations become technically impossible.

Now, let's see what a user's self-consent looks like. What happens when users try to exercise direct control over their own financial data? Theory meets reality, and the gaps become visible quickly.

User Control

Let's zoom out a bit and explore what consent is truly about: User Control.

This means the user should be able to control what data they share (selective inference instead of blanket sharing) and how that data is used (misuse prevention).

The concept is straightforward. Instead of signing away broad permissions to institutions, users make granular decisions about their information. They choose what to share, decide who can see it, specify how long access lasts, and understand exactly what it will be used for. No more pages of legal text where "consent" means "we can do whatever we want with your data." Instead, users become active participants in data sharing decisions.

In theory, this solves everything. Users aren't trapped by opaque terms of service. They aren't forced to choose between sharing everything or getting no service at all. They maintain agency over their financial information, granting access selectively and revoking it when needed. It's digital sovereignty applied to personal data.

In practice, two fundamental issues arise immediately. Both stem from the infrastructure gap between what users should be able to do and what the current system actually supports.

The Issues

Issue 1: Friction for users to retrieve their data and share it with institutions (time, effort)

The first barrier is operational. Getting your own financial data and sharing it with another institution takes significant time and effort. Every step involves manual work, multiple interfaces, and potential errors.

Canada's financial regulator took an important step by quantifying the economic cost of consumer data mobility barriers [1]. This was concrete policy work: real numbers on what friction costs. The research found that 28% of Canadians are Anxious Traditionalists who worry about data security during transfers, and another 12% are Reserved Skeptics who distrust the technology altogether [1]. Together, these groups represent 40% of the population, for whom the current self-consent model creates genuine anxiety or outright rejection.

The friction has real operational costs. Open banking ecosystems face fragmented user experiences: too many interfaces, limited interoperability, manual downloads, incompatible formats [2]. A user trying to share financial data typically navigates multiple systems. They log into their bank's online portal, find the right section for statements, select the correct date range, download PDFs, open each file to verify completeness, sometimes convert formats, redact sensitive information manually, upload to the requesting platform, and then wait for verification. Each step introduces delay. Each handoff creates risk of error or data loss.

The infrastructure treats data portability as an afterthought rather than a fundamental right. Banks provide download functions because they must, not because they've designed elegant data sharing workflows. The result is friction at every stage. Users spend time they shouldn't have to spend. They perform technical tasks they shouldn't need expertise for. And at the end of this process, they often hit the second barrier.

Issue 2: How does the data-consuming organization trust this data?

Even when users successfully navigate the friction and share their financial data, a fundamental question remains: How does the receiving organization know it's real?

Self-reported financial data faces systemic credibility issues. The numbers paint a stark picture. The IRS reports a 55% misreporting rate for self-employment income, specifically for income sources without third-party verification [3]. Fraud doesn't account for all cases. Sometimes it's honest mistakes in complex tax situations. But the result is the same: over half of self-reported income data is inaccurate.

The lending industry struggles with this daily. 83% of mortgage lenders say they struggle to use gig economy income in mortgage approvals because they can't verify it [4]. This affects millions of workers. Freelancers, contractors, gig workers, and anyone with variable income face additional barriers to homeownership not because they can't afford mortgages, but because the verification infrastructure doesn't exist. Lenders can't distinguish between accurate self-reports and inflated claims.

The auto lending sector provides even more concerning data. The industry faces $9.2 billion in fraud exposure, with 43% of that fraud coming from income and employment misrepresentation [5]. First-party fraud, where borrowers or dealerships misrepresent information, accounts for 69% of the total risk exposure. This isn't organized crime. It's systematic inflation of income figures, fabricated employment verification, and doctored pay stubs.

Rental housing shows similar patterns. 93% of apartment operators experienced fraud in 2024, with 84% seeing falsified pay stubs or income documents [6]. Landlords can't verify what renters submit. Social media platforms share tutorials on document fabrication. The infrastructure to cryptographically verify authenticity doesn't exist at scale, so recipients face a binary choice: reject all self-reported data outright and limit service access, or accept it with elevated risk and pass those costs on to honest users through higher rates and stricter requirements.

Without cryptographic proof of authenticity, there's no middle ground. A PDF downloaded from a bank portal looks identical to a PDF edited in software. A screenshot of a transaction appears the same whether it's real or fabricated. Digital documents without cryptographic signatures carry no verifiable proof of origin or integrity.
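To make that concrete, here is a minimal sketch in Python (using the widely available cryptography library) of what a signature changes. The bank key and statement contents are hypothetical, and a real deployment would anchor keys in a certificate chain and use standardized statement formats. The point is that any edit to the signed bytes, however small, fails verification.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical signing key held by the bank (in practice, anchored in a PKI).
bank_key = Ed25519PrivateKey.generate()
bank_pub = bank_key.public_key()

statement = b'{"account": "XX1234", "month": "2026-03", "net_deposits": 5420}'
signature = bank_key.sign(statement)

# The recipient checks the bytes against the bank's public key.
bank_pub.verify(signature, statement)  # passes silently: authentic, unaltered
print("verified: statement is authentic")

# A single edited digit breaks verification.
tampered = statement.replace(b"5420", b"9420")
try:
    bank_pub.verify(signature, tampered)
except InvalidSignature:
    print("rejected: statement was altered")
```

An edited PDF carries no such property: two files can look identical on screen while only one of them verifies.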

Let's Talk About Financial Data

The friction and trust problems we've outlined compound dramatically when it comes to financial information. Financial data is uniquely sensitive, highly regulated, and absolutely critical for major life decisions. But the infrastructure to share it securely and verifiably barely exists.

Consider three common scenarios where people need to share financial data with non-bank entities:

  • A freelancer applying to a gig platform needs to prove consistent income to get approved for higher-value contracts. The platform wants verification but has no direct connection to the freelancer's bank. The freelancer downloads six months of statements, redacts merchant details they consider private, uploads files, and waits. The platform's review team manually checks the documents, can't verify authenticity, and either rejects the application or accepts the risk.

  • An employee submits expense reports for reimbursement. Their company's policy requires receipt verification for amounts over a certain threshold. The employee takes photos of paper receipts, uploads them to an expense system, and hopes they don't get flagged for missing information. The finance team manually reviews hundreds of submissions, unable to verify that receipts haven't been submitted multiple times or amounts haven't been altered.

  • A tenant applies for an apartment. The landlord wants proof of employment and income stability. The tenant provides pay stubs, bank statements, and employment verification letters. None of it is cryptographically signed. The landlord must trust that documents are real or pay a third-party service to attempt verification, adding cost and delay.

Each scenario follows the same workflow: navigate online banking to download statements, manually redact sensitive information, upload files through various portals, wait for manual review, and often face rejection because verification fails. The process is slow, error-prone, and fundamentally insecure. Recipients can't confirm documents are authentic or unaltered. Senders have no visibility into how their data will be used after sharing. Both parties operate without real infrastructure.

What We Already Have

The practical and ideal path is to use what we already have in place and build on it. Rather than creating entirely new systems, we should expand existing frameworks that have proven they work. India's Account Aggregator system provides a working model.

India's Account Aggregator (AA) framework solves the verification and control problems elegantly for regulated entities. The growth metrics show rapid adoption [7]. As of December 2025, 126 financial institutions have gone live as both FIP (Financial Information Provider) and FIU (Financial Information User). More than 2.61 billion financial accounts are enabled to share data through the system. The scale of adoption is striking: FIUs grew from 128 to 435 in just one year. FIPs expanded from 29 to 151. Most telling, linked accounts grew 8x, from 8.92 million to 70.82 million [8]. This isn't theoretical infrastructure. It's operational, growing, and handling real financial data transfers at scale.

The AA framework provides three critical capabilities that address the issues we've discussed:

  • User-controlled consent: Users approve each data request explicitly. Consent is granular, meaning users can share specific account information rather than blanket access. It's time-bound, with automatic expiration. And it's revocable, letting users withdraw permission at any time. This isn't buried in terms of service. It's built into the technical architecture.

  • Verified data: Data comes directly from the source institution with no intermediate handling. When a lender requests a bank statement through AA, the data flows from the bank to the lender with cryptographic verification. There's no manual download, no PDF that could be edited, no screenshot that could be faked. The recipient knows the data is authentic because it comes from the regulated institution that holds it.

  • End-to-end encryption: AAs are "data-blind pipes" [9]. They facilitate the transfer but cannot read or store the data in transit. The information remains encrypted from source to destination. This architectural choice means even the intermediary enabling the transfer can't access sensitive financial information. Users don't have to trust the AA with their data. They only have to trust the encryption.

This works. Banks verify themselves cryptographically. Users control access through explicit consent with audit trails. Recipients get authenticated data they can trust. The system handles the complexity of secure data transfer, format standardization, and consent management so that neither party has to.
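As a rough illustration of what consent "built into the technical architecture" can mean, the sketch below models a consent artifact with the three properties above: granular scope, automatic expiry, and immediate revocation, with every access attempt logged. The field names and structure are illustrative assumptions, not the AA specification.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class ConsentArtifact:
    user_id: str
    recipient: str
    scope: frozenset           # granular: exactly which data types are covered
    expires_at: datetime       # time-bound: access ends automatically
    revoked: bool = False      # revocable: the user can withdraw at any time
    audit_log: list = field(default_factory=list)

    def permits(self, data_type: str) -> bool:
        """Evaluate a request against scope, expiry, and revocation; log it."""
        now = datetime.now(timezone.utc)
        allowed = (not self.revoked
                   and now < self.expires_at
                   and data_type in self.scope)
        self.audit_log.append((now, self.recipient, data_type, allowed))
        return allowed

consent = ConsentArtifact(
    user_id="user-42",
    recipient="lender-7",
    scope=frozenset({"bank_statement_summary"}),
    expires_at=datetime.now(timezone.utc) + timedelta(days=30),
)
print(consent.permits("bank_statement_summary"))  # True: in scope, not expired
print(consent.permits("transaction_history"))     # False: outside granted scope
consent.revoked = True
print(consent.permits("bank_statement_summary"))  # False: revocation is immediate
```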

Example: A non-regulated entity wanting a bank statement from a user

But there's a limitation that reveals the system's blind spot: the framework needs to extend to non-regulated entities as well.

Consider this scenario: A platform wants to verify a freelancer's bank statement for onboarding. The platform reviews income consistency to determine which opportunities the freelancer qualifies for. This is a legitimate use case. Both parties would benefit from secure, verified data sharing. The freelancer wants to prove their income without manual document handling. The platform wants verification without manually reviewing potentially fabricated PDFs.

But the platform isn't regulated by a financial sector regulator. Only entities regulated by RBI, SEBI, IRDAI, or PFRDA can participate as FIUs in the AA framework [10]. The gig platform, despite being a legitimate business with a genuine need for financial verification, cannot become an FIU under current rules. The freelancer is back to the manual process: downloading PDFs, redacting information, uploading files, and hoping for approval.

This limitation isn't arbitrary. Regulators want oversight over who accesses citizen financial data. That's reasonable. But it creates a sharp divide between regulated financial entities that can use modern infrastructure, and every other legitimate use case that cannot. The exclusion list is long:

  • Gig economy platforms that need income verification for contractor qualification.

  • Landlords who need employment and income proof for rental applications.

  • Unregulated fintech startups building innovative services.

  • Employers verifying expense claims and reimbursements.

  • Non-profit organizations providing financial assistance.

  • Any entity outside the financial sector with a legitimate need for verified financial data.

All of these use cases fall back to manual processes, unverifiable PDFs, and the trust problems we outlined earlier. The infrastructure exists. The standards work. The question is scope.

The Trust and Accountability Problem

Since these are non-regulated entities, though, the trust and accountability issues remain. Simply opening AA access to unregulated entities without additional safeguards would solve the verification problem but create new risks.

What would happen if the data is misused? Who is responsible?

For regulated entities participating in AA, the answer is clear. If a bank misuses customer data, the RBI has enforcement authority. If an insurance company violates consent boundaries, IRDAI can impose penalties. If a mutual fund platform fails to protect data, SEBI can take action. The regulatory framework provides oversight, establishes standards, enforces compliance, and penalizes violations. Users have recourse beyond trying to sue a financial institution.

For non-regulated entities, this oversight doesn't exist. A gig platform that gains AA access could theoretically request bank statement data for "income verification" and then analyze spending patterns for unrelated purposes. A landlord could ask for employment verification but retain transaction-level data indefinitely. An expense management platform could gather more data than necessary and monetize it through analytics services. Without regulatory oversight, there's no enforcement mechanism beyond legal recourse after the fact.

The consent problem shifts but doesn't disappear. Users might consent to data sharing for one purpose, but how do they enforce that purpose limitation? If a gig platform only needs to verify that income exceeds a threshold, how does the system prevent that platform from accessing full transaction histories? If a landlord only needs proof of employment, how do we ensure they can't see where the tenant spends money or what they buy?

The AA framework provides verified data and user-controlled consent. But it doesn't cryptographically enforce what recipients can do with that data once they receive it. Consent becomes a contractual agreement again rather than a technical constraint. We're back to trusting that entities will honor their stated purposes, with legal remedies as the only enforcement tool. For regulated entities under continuous oversight, that's acceptable. For the vast universe of unregulated entities, it's insufficient.

The Solution

A solution requires two components working together. Neither is sufficient alone. Both are necessary for self-consent to work at scale across regulated and unregulated entities.

Component 1: Control over sharing and usage by customer

Users need cryptographic control, not contractual promises. This means building technical mechanisms that enforce user decisions regardless of what receiving entities want to do.

  • Selective data sharing: Users should be able to share only what's needed for a specific purpose. If income verification requires knowing that monthly deposits exceed $5,000, the system shouldn't expose individual transaction amounts, merchant names, or spending patterns. The data shared should be the minimum necessary to answer the specific question (see the sketch after this list).

  • Time-bound access: Consent should expire automatically based on the purpose. If a landlord needs employment verification for a rental application, access should end when the lease is signed or the application is rejected. There shouldn't be indefinite data retention requiring users to remember to revoke access later.

  • Revocable permissions: Users should be able to withdraw consent at any time and have that withdrawal enforced immediately. Not at the next billing cycle. Not after a grace period. The moment consent is revoked, access stops.

  • Audit trails: Users need visibility into who accessed what data, when, and for what stated purpose. This means making logs accessible to users in understandable format so they can monitor how their data is being used and detect violations.
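One way to picture selective sharing is as a predicate evaluated next to the data: the question gets answered, but the records never leave. A minimal sketch, with made-up transactions and the $5,000 threshold from the example above:

```python
from datetime import date

# Hypothetical transaction records held at the source institution.
TRANSACTIONS = [
    {"date": date(2026, 3, 2),  "amount": 3200, "merchant": "Acme Corp"},
    {"date": date(2026, 3, 15), "amount": 2750, "merchant": "Globex"},
    {"date": date(2026, 3, 28), "amount": -940, "merchant": "Grocer"},
]

def monthly_deposits_exceed(threshold: int) -> bool:
    """Answer the verifier's question without exposing any transaction."""
    deposits = sum(t["amount"] for t in TRANSACTIONS if t["amount"] > 0)
    return deposits >= threshold

# The recipient learns one bit: the answer. Amounts, merchants, and
# spending patterns never cross the boundary.
print(monthly_deposits_exceed(5000))  # True
```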

Component 2: Proof that data is verified and comes from trusted sources

If data comes from the source institution with cryptographic proof of authenticity, the recipient doesn't need to guess whether it's real. This eliminates the entire category of trust problems we described earlier with falsified documents and manual verification.

Account Aggregator can solve Component 2. Data pulled through AA is verified. It comes directly from FIPs (banks, insurers, tax authorities, pension funds) and is encrypted end-to-end. Recipients know the data is authentic because the architecture guarantees it. There's no opportunity for intermediate manipulation. The cryptographic signatures prove provenance.

This is why expanding AA access to non-regulated entities is attractive. It would immediately solve the verification problem for millions of use cases currently stuck with unverifiable PDFs. Gig platforms could trust income data. Landlords could verify employment. Expense systems could authenticate receipts.

However, governing usage, especially for unregulated entities, is even more critical. This is where privacy-preserving computation and consent-bound usage become essential. This is the piece that's missing.

If a gig platform only needs income verification, the system should be cryptographically unable to perform spending pattern analysis. Not contractually prohibited. Not against terms of service. Technically impossible. The computation infrastructure should enforce purpose limitation at the architectural level. If the consent says "verify income," then the only operation that can execute is an income verification query. Attempts to run other analyses fail at the technical layer.

If a landlord only needs proof of employment, transaction-level data shouldn't be accessible in any form. The system should return a cryptographically signed attestation that the person is employed at a specific company, without ever exposing individual salary payments or other transaction details. The landlord gets the verification they need. The user shares the minimum information necessary.

Purpose limitation becomes enforceable, not promised. The computation is bound to the consent. Unauthorized operations cannot run. This makes misbehavior technically impossible.

This is what binding consent to computation means in practice. The receiving entity gains access only to the specific computational results that the consent permits. They cannot run queries outside that scope. They cannot retain raw data beyond the specified timeframe. They cannot repurpose information for different analyses. The technical architecture enforces these constraints regardless of what the entity wants to do.
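A minimal sketch of the idea, assuming a registry that binds each consent purpose to the single computation it permits. In a real system this boundary would be enforced cryptographically or in trusted hardware rather than by application code; the names and data here are illustrative.

```python
# Each consent purpose maps to exactly one permitted computation.
# In production this boundary would live in a TEE or an MPC protocol,
# not a Python dict; the dict stands in for that enforcement layer.

def verify_income(records: list, threshold: int = 5000) -> bool:
    """Return only a yes/no attestation, never the underlying records."""
    return sum(r["amount"] for r in records if r["amount"] > 0) >= threshold

PERMITTED = {"income_verification": verify_income}

def execute(consent_purpose: str, records: list):
    op = PERMITTED.get(consent_purpose)
    if op is None:
        # Out-of-scope queries fail at the technical layer,
        # not at a terms-of-service layer.
        raise PermissionError(f"no computation bound to '{consent_purpose}'")
    return op(records)

records = [{"amount": 3200}, {"amount": 2750}, {"amount": -940}]
print(execute("income_verification", records))  # True: the permitted query runs

try:
    execute("spending_pattern_analysis", records)
except PermissionError as e:
    print(e)  # no computation bound to 'spending_pattern_analysis'
```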

Closing

The path forward combines three elements, each building on the others to create a system where self-consent actually works:

User-centric designs. Put users in control with explicit, granular, revocable consent. Make the interface understandable so users can make informed decisions. Provide audit trails so they can monitor usage. Design workflows that respect user agency rather than treating consent as a checkbox to bypass. This is the foundation.

Leveraging existing frameworks. Expand AA's verification capabilities beyond regulated entities. The infrastructure exists. The standards work. The adoption metrics show it scales. The question isn't whether to build new systems from scratch. It's whether to extend proven frameworks to cover the full range of legitimate use cases. This solves the verification problem and eliminates the friction of manual data transfers.

Role of privacy. Bind consent to computation so that unauthorized operations cannot run. Purpose limitation becomes technically enforced, not legally promised. This is what makes extending AA access to unregulated entities safe. Without cryptographic enforcement of usage boundaries, expanded access creates new risks. With it, we can have both verified data and protected privacy.

Right now, we have frameworks that work for regulated entities. The infrastructure handles secure data transfer, consent management, and cryptographic verification. Extending those frameworks to non-regulated use cases would solve the friction and trust problems we've outlined. But extension without usage enforcement would be incomplete and potentially dangerous.

Self-consent solved the access problem. Users should control their data, not institutions. The next step is ensuring that data shared is verifiable and usage is enforceable. Verification without enforcement creates new attack surfaces. Enforcement without verification leaves the trust gap open. Both are necessary.

If consent can't be bound to both verification and computation, it's still just paperwork.

References

[1] Environics Research, "Data portability isn't a policy detail – it's a pressure release", Open Banking Expo, February 2026

[2] Erick Watson et al., "Open Banking's Next Phase: AI, Inclusion and Collaboration", FinTech Magazine, August 2025

[3] IRS, Tax Gap Projections for Tax Year 2022, Publication 5869, October 2024

[4] Fannie Mae Economic & Strategic Research, "Leveraging Variable and Gig Income to Expand Access to Homeownership", January 2025

[5] Point Predictive, "2025 Auto Lending Fraud Trends Report", March 2025

[6] NMHC/NAA, "Pulse Survey: Analyzing the Operational Impact of Rental Application Fraud and Bad Debt", February 2024

[7] Department of Financial Services, Government of India, "Account Aggregator Framework", December 2025

[8] The Digital Fifth, "Account Aggregator", October 2025

[9] Razorpay, "What Is an Account Aggregator? RBI AA Framework Explained", December 2025

[10] Sahamati, "Account Aggregator FAQ", October 2025

Read Part 1: [Consent Is Not Privacy: Why We Need to Rethink Data Protection]

Learn more: Open Finance Revisited: Strengthening Data Governance with Cryptographic Privacy and Auditability

