The DDQ didn't always matter this much. For most of the private markets' history, it was a standardized checkpoint. Firms answered DDQs manually, tracked responses in spreadsheets, and moved on. The process was slow, but the volume was manageable.
That world is over. ILPA's standard diligence framework has grown from eight sections to 21 comprehensive modules, and every crisis leaves its mark. In 2020, questions about sustainability surged. Since the regional banking crisis of 2023, LPs have been asking about backup financial institutions. Now they are asking about AI as the technology permeates every industry. As Thomas Buley, founder of Sightglass and now Director of Product Management at Juniper Square, put it: "No LP ever says we're going to ask fewer questions. It's always additive."
The result is a document that can easily run to 500 questions (Buley has seen as many as 750), require 40 to 60 hours of coordination across IR, finance, compliance, and legal teams, and still, according to Buley, represent asymmetric risk. "You might not win a mandate because you filled out a DDQ in the best way," he said, "but you can absolutely lose it by filling it out poorly."
That asymmetry is at the center of Juniper Square's acquisition of Sightglass—and the reason Jay Farber, General Manager of GPX at Juniper Square, believes the industry has arrived at a genuine inflection point. "We are at a place where firms with connected data and AI-driven operations will separate permanently from those still waiting to see how it plays out."
The Jevons Paradox and the compounding diligence burden
There's a Jevons Paradox operating in the DDQ market: as AI makes it more efficient for GPs to tackle DDQ requests, LPs respond with even more requests.
Larger institutions—those writing $100M+ checks—have always had leverage to demand comprehensive responses. But as the technological barrier to questionnaire creation has lowered, smaller LPs have begun to expect the same quality of response at the same speed. "As technology's gotten better," Buley observed, "more LPs are saying, well, this can't be that hard for you to answer my 50 questions. You can only meet that rise in demand with better technology."
The competitive stakes compound this dynamic. With capital increasingly concentrated among established managers—77.4% of private equity capital in H1 2025 was allocated to funds exceeding $1 billion—smaller and mid-market firms are under more pressure than ever to project operational maturity. A slow or inconsistent DDQ response doesn't just delay a close. It signals to an LP that the firm doesn't have the institutional infrastructure to be a long-term partner.
Why a horizontal AI tool isn't enough
The market has not been slow to identify the DDQ problem. Farber estimates that perhaps a dozen vendors offer some form of DDQ automation. But most are general-purpose platforms that add the private markets as a use case—what Farber and Buley describe as "horizontal" tools: broad, shallow, and built for the wrong buyer. Generalized AI vendors have also tried to solve the problem, but AI alone, without integration into a broader IR workflow, doesn't work either. Horizontal tools also carry a high upfront cost and require manually onboarding past DDQs onto the platform before they become useful. "No one signed up for these tools to spend dozens of hours manually preparing their answer bank and each DDQ for automation to start," added Farber.
First-generation DDQ tools were built for B2B SaaS sales teams responding to security questionnaires and RFPs. They rely on keyword matching and fuzzy search to pull relevant answers from a content library—a fundamentally different problem than the one IR teams face. "When you get into fund-specific strategies that have different answers, and you have to have multiple compliance workflow steps," Buley said, "a lot of tools just don't meet the needs of regulated financial institutions."
The cost of a wrong answer in a DDQ isn't a lost software sale. It's a potential compliance violation or reputational breakdown at the start of what should be a decade-long partnership. For that reason, Sightglass was architected around two principles that general tools can't replicate: deterministic accuracy and fund-level data isolation.
"We actually do a lot of work to keep things deterministic upfront," Buley explained. "If you've answered this exact question before, we will give you your exact approved language. We're not generating anything. We use AI to save time and to pull in information from different sources—but we know you don't want to be creative when you're in IR and responding to a question that might have SEC fraud implications."
At the same time, fund-level data isolation matters in ways that horizontal tools typically ignore. A firm managing three funds with different strategies, vintages, and risk profiles needs to ensure that questions about dry powder, valuation methodology, or governance don't surface answers from the wrong fund.
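One way to picture fund-level isolation is a store in which every answer is scoped to a fund and retrieval never crosses that boundary. This is a minimal sketch under assumed names, not a real product API.

```python
# Illustrative sketch of fund-level data isolation: every stored
# answer carries a fund_id, and search is hard-scoped to the
# requesting fund. Names are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Answer:
    fund_id: str
    question: str
    text: str

class ScopedAnswerStore:
    def __init__(self) -> None:
        self._answers: list[Answer] = []

    def add(self, answer: Answer) -> None:
        self._answers.append(answer)

    def search(self, fund_id: str, keyword: str) -> list[Answer]:
        """Only answers belonging to the requesting fund are candidates,
        so a match from another fund's data can never surface."""
        return [
            a for a in self._answers
            if a.fund_id == fund_id and keyword.lower() in a.question.lower()
        ]
```

Because the fund filter is applied before any relevance matching, a question about Fund II's dry powder cannot pull Fund III's numbers, no matter how similar the questions are.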
The barbell and the broken org chart
The private markets are currently operating as a barbell—lean, growing firms on one end; established megafunds on the other. The DDQ burden looks different at each extreme, but both ends are breaking under the same structural weight.
For smaller GPs where the head of investor relations also handles marketing, compliance, and reporting, the problem is capacity constraints. A single IR professional cannot sustain the breadth and depth of diligence requests from LPs without something giving way. "All those hours spent each week answering DDQs are hours not spent engaging with LPs and developing an investor pipeline," said Buley. Historically, the result has been quiet triage: anchor LPs get thorough responses, while smaller checks get shortcuts. "In the old days, people might not respond to a specific question, and we'd see a lot of historical DDQs where 20% of the answers just say, 'see data room.' That is not the high-touch, real partner feeling an LP wants when they get their diligence questionnaire back," he added.
The stakes of this triage are higher than they appear. Every LP, regardless of check size, is a potential anchor for a future fund. "The goal is to treat every LP like your anchor LP," Buley said. With AI-generated first drafts completing in a fraction of the time, firms can bring the same rigor to a 50-question questionnaire from a smaller investor as they would to a 500-question questionnaire from a pension. "They may not have an associate on their team," Buley said. "This is their associate."
At the other end of the barbell, megafunds face a different failure mode: institutional amnesia and coordination tax. When 50 people across three continents are contributing to DDQ responses, keeping everyone aligned on a current, compliance-approved answer to a standard question becomes a project management problem as much as a content problem. "There is a tremendous coordination tax," Buley observed, "of making sure everyone has the same information, pulling from the most up-to-date answer bank, using approved language, and using experts' time wisely." The most senior people on large IR teams aren't primarily knowledge workers—they're relationship managers. When they spend time on project management and version control, they're not spending it with LPs.
Farber framed the stakes in terms of institutional positioning: "If you are a small firm and you take three weeks to answer a DDQ, you look like a legacy shop. If you're a big firm and you provide inconsistent data across different investor experiences, you create a red flag for the LP."
The compounding interest of connected data
Most IR teams already manage a fragmented ecosystem. Data rooms sit in one place, the CRM in another, the investor portal somewhere else, and the accounting system in a fourth. A standalone DDQ tool adds a fifth. "It became very clear," Farber said, "that DDQ was on its way to just being the sixth or seventh system that IR teams were having to use that was disconnected and doing different things."
The cost of fragmentation isn't just the maintenance overhead. It's version drift. A document updated in the data room doesn't automatically sync to the DDQ answer bank. An LP's relationship status—their recommitment history, their scheduled meetings, their check size relative to the fund—isn't visible to the person drafting their questionnaire response. Data residing in the fund administration system—position-level details, performance figures, treasury information—requires regular manual import work to get into a standalone DDQ platform.
When DDQ automation is built into the same unified system that holds investor records, fund documents, and CRM data, those connections become automatic. "If you update a document in your data room or your investor data changes, JunieAI can know that instantly," Farber said. "You should be able to do your work in one system."
Buley described the compounding effect directly: "We believe we have a great DDQ product that is solving needs today with our ability to get siloed data—but when you get into the world of Juniper Square, where there is CRM data, where you know what LPs care about, where there is fund administration data and you have position-level details to respond to these—the product gets even better." More data, more context, more accurate responses.
That connected infrastructure also changes the economics of the IR function itself. The CFO—often an overlooked participant in DDQ workflows—is frequently pulled in to answer financial questions or verify data. Without a connected system, it looks like a partner reviewing Doc V7.4 on a Friday night, annotating in red pen and answering the same question for the fifth time. With one, it looks like a final review against a clean, pre-populated draft.
From tool to agent—and what that actually means
One of the biggest discussions in AI centers on the "agent"—a term that has expanded to encompass everything from a well-designed dashboard to a fully autonomous workflow.
The Juniper Square DDQ, as Buley describes it, is a DDQ agent for IR teams. The platform ingests a questionnaire, pulls relevant context from the firm's data across multiple sources, and returns a first draft—all without requiring the user to be present during that work. That background operation is the agentic step.
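That ingest, retrieve, and draft loop can be illustrated with a minimal sketch. Everything here is hypothetical: the function name and the naive keyword retrieval stand in for the real system, purely to show the shape of a background job that runs without the user present.

```python
# Hypothetical sketch of the agentic DDQ loop described above:
# ingest a questionnaire, pull context per question, and emit a
# first draft for human review. Not Juniper Square's implementation.

def run_ddq_agent(questionnaire: list[str], sources: dict[str, str]) -> list[dict]:
    """Build a reviewable first draft without the user present."""
    draft = []
    for question in questionnaire:
        words = question.lower().replace("?", "").split()
        # Naive retrieval stand-in: find sources sharing any keyword.
        context = [name for name, text in sources.items()
                   if any(w in text.lower() for w in words)]
        draft.append({
            "question": question,
            "context_sources": context,
            "needs_review": True,  # every answer awaits human approval
        })
    return draft
```

The point of the sketch is the control flow, not the retrieval: the work happens in the background, and every answer lands in a review queue rather than going straight to the LP.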
Farber offered a useful framing: "Juniper Square DDQ is your IR analyst: it does something meaningful and does it independently. That makes it an agent. That doesn't mean it can do everything right now, but it will do more and more over time."
The distinction has real implications for GPs evaluating where to trust automation. Farber was direct about where the bar should be: "Something that works 90% of the time does not work for an operational workflow that needs to work 100% of the time." That's the practical case against vibe-coded, internally-built alternatives—not that AI can't help with one-off queries, but that regulated, customer-facing, multiplayer workflows require production-grade infrastructure, audit trails, and compliance controls that a weekend build cannot provide.
The longer-term direction is toward greater autonomy—a future in which the system not only generates the first draft but also manages routing, tracks approvals, and delivers the final response to the LP. But that evolution requires LP and regulatory buy-in that isn't fully there yet. The fully agentic LP-GP interaction—where agents on both sides exchange and verify information without human intermediaries—remains, in Buley's estimation, "very far down the line for regulated GPs and LPs." But Juniper Square is committed to building for that world, noted Farber, with a product roadmap centered on an AI-native future.
What will always remain is the most important part: the relationship. "Every partnership between an LP and a GP is based on relationships and shared accountability," Farber said. "AI gets everything else out of the way so that relationships can happen and partnerships can form." These partnerships, as Buley noted, often last longer than the average marriage. The technology's job is to clear the path—not to walk it.
In closing
In a fundraising environment where capital is more concentrated, timelines are longer, and LP expectations have compounded with every market event of the past decade, operational credibility has become inseparable from investment credibility. The DDQ is where that credibility is first tested.
The firms that treat this as an administrative problem are solving the wrong problem. The firms that treat it as a data problem, one that can only be solved at the platform level, are building the infrastructure that will define how they compete over the next decade.
"The competitive advantage doesn't go to the firm with the most employees," Farber said. "It goes to the firm with the most connected data."
The DDQ is the front door. What GPs build behind it determines who gets invited in.
IR teams are managing a growing volume of complex diligence requests with the same—or fewer—resources. Juniper Square's DDQ app, powered by JunieAI, ingests questionnaires in any format, generates a first draft from the firm's knowledge base, and returns a document ready for human review. Learn more →