Content Sourcing Policy
Last updated: February 9, 2026
MindCast is a concept explainer platform that transforms publicly available information into structured audio learning experiences. This page explains how we source, verify, and attribute the information in our episodes.
1. What MindCast Is
MindCast is cognitive infrastructure for curious minds. We are not an AI wrapper, a book summary service, or a content aggregator. We are a media engine that turns information into knowledge through structured audio episodes.
Our platform:
- Explains concepts — We break down complex ideas from science, history, philosophy, psychology, economics, and technology into clear, engaging audio narratives.
- Synthesizes public knowledge — We draw from multiple openly available sources to present balanced, well-researched explanations.
- Promotes active learning — Our episodes include retrieval practice, reflection prompts, and spaced repetition to help listeners retain what they learn.
2. Our Sources
Every MindCast episode is built on research from publicly available, reputable sources. We prioritize the following categories:
2.1 Preferred Sources
These are the foundation of our content and are always prioritized in our research pipeline:
- Encyclopedias — Wikipedia, Stanford Encyclopedia of Philosophy, Britannica
- Academic institutions — University research papers, .edu domains, academic databases
- Government sources — CDC, NIH, NASA, NOAA, and other .gov domains
- Open-access journals — arXiv, PLOS, PubMed, and other open research repositories
- Public domain texts — Project Gutenberg, Internet Archive, public domain literature
2.2 Acceptable Sources
These are reputable sources we use to supplement and contextualize information:
- Major publishers — The New York Times, BBC, Reuters, The Guardian, Nature, Science
- Established knowledge platforms — TED, Khan Academy, Coursera, edX
- Recognized thought leaders — Blogs and publications from authors with established, verifiable expertise in their field
2.3 Restricted Sources
We limit our reliance on these sources and flag content that draws heavily from them:
- Social media posts and threads
- Forums and community discussions
- Personal blogs without established expertise
- Paywalled content we cannot independently verify
2.4 Blocked Sources
We never use content from these sources and actively filter them from our research pipeline:
- Piracy sites or unauthorized reproductions of copyrighted works
- Content farms and low-quality aggregators
- AI-generated content farms
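The tiers above can be applied mechanically during research. The following is a minimal sketch of such a classifier, not our production filter; the domain lists and the `example-content-farm.com` entry are illustrative assumptions:

```python
from urllib.parse import urlparse

# Illustrative tier lists -- assumptions for this sketch, not the real rules.
PREFERRED = {"wikipedia.org", "plato.stanford.edu", "arxiv.org", "gutenberg.org"}
BLOCKED = {"example-content-farm.com"}  # hypothetical blocked domain

def classify_source(url: str) -> str:
    """Return a policy tier for a source URL based on its domain."""
    host = urlparse(url).netloc.lower()
    # Match the host or any parent domain (e.g. en.wikipedia.org -> wikipedia.org).
    parts = host.split(".")
    candidates = {".".join(parts[i:]) for i in range(len(parts))}
    if candidates & BLOCKED:
        return "blocked"
    if candidates & PREFERRED:
        return "preferred"
    if host.endswith(".gov") or host.endswith(".edu"):
        return "preferred"
    return "web"  # default tier for unclassified web sources

print(classify_source("https://en.wikipedia.org/wiki/Entropy"))  # -> preferred
```

A real pipeline would also need registered-domain handling (public suffix rules) and per-path exceptions, but the tiering logic is the same.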
3. What We Don't Do
MindCast is designed to explain concepts, not to reproduce or summarize protected works. Specifically:
- We don't summarize copyrighted books — If you ask for a book summary, we reframe the request into a concept explainer that covers the underlying ideas using publicly available knowledge.
- We don't reproduce protected articles — We synthesize information from multiple sources rather than reproducing any single source.
- We don't claim original research — We are explainers and educators, not primary researchers. All information is attributed to its sources.
- We don't use pirated content — Our research pipeline automatically detects and removes sources from known piracy sites.
4. Attribution
Transparency is core to our mission. Every MindCast episode includes:
- Source citations — All sources used in research are listed alongside the episode, with titles, authors, and links where available.
- Source classification — Each source is categorized (encyclopedia, academic, publisher, open license, public domain, or web) so listeners can evaluate the quality of our research.
- AI transparency — We use AI models to research, draft, fact-check, and narrate episodes. The generation pipeline is visible in the job progress view.
5. Our Quality Pipeline
Every episode goes through a multi-stage quality assurance process:
- Research — In-depth research from preferred and acceptable sources
- Source Policy Check — Automated classification of all sources; blocked sources are removed
- Dual Draft + Judge — Two independent AI drafts are generated and evaluated by a judge model
- Support Check — Verifies that the script stays faithful to the underlying research (prevents factual drift)
- RAG Grounding — Cross-references claims against source material
- Automated Fact-Check — LLM-powered verification of all claims
- Enhancement — Narrative polish, de-AI voice treatment, and audio optimization
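Conceptually, these stages run in sequence, and a failure at any gate halts the episode. Here is a simplified sketch of that control flow; the stage stubs and field names are hypothetical, not our actual implementation:

```python
# Illustrative sketch of a sequential quality pipeline. Each stage either
# transforms the episode data or raises to halt production.
def run_pipeline(topic, stages):
    """Run an episode through quality stages in order; any stage may raise."""
    episode = {"topic": topic, "log": []}
    for name, stage in stages:
        episode = stage(episode)        # a failing stage raises and halts the run
        episode["log"].append(name)     # record which gates have passed
    return episode

def research(ep):
    # Hypothetical stub: attach sources gathered during research.
    ep["sources"] = ["https://en.wikipedia.org/wiki/Entropy"]
    return ep

def source_policy_check(ep):
    # Hypothetical stub: halt if any source looks blocked.
    if any("piracy" in s for s in ep["sources"]):
        raise ValueError("blocked source detected; episode halted")
    return ep

result = run_pipeline("entropy", [("research", research),
                                  ("source-policy-check", source_policy_check)])
print(result["log"])  # -> ['research', 'source-policy-check']
```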
6. Canon Content Standards
Our Canon Library — the curated collection of permanent episodes — has additional requirements:
- At least two preferred or acceptable sources
- No blocked sources
- Restricted sources limited to fewer than 50% of all cited sources
- Quality-gate scoring across accuracy, engagement, and clarity
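The source-count thresholds above amount to a simple gate over the tier of each cited source. A minimal sketch, assuming a hypothetical function name and the tier labels from Section 2:

```python
# Hypothetical canon gate reflecting the thresholds above. The function name
# and tier labels are illustrative assumptions, not MindCast's real code.
def passes_canon_gate(tiers):
    """tiers: the policy tier of each cited source, e.g. ["preferred", "web"]."""
    if not tiers or "blocked" in tiers:
        return False                    # zero blocked sources allowed
    strong = sum(t in ("preferred", "acceptable") for t in tiers)
    restricted = sum(t == "restricted" for t in tiers)
    return strong >= 2 and restricted / len(tiers) < 0.5

print(passes_canon_gate(["preferred", "acceptable", "restricted"]))  # -> True
```

Note that this covers only the source-count requirements; the accuracy, engagement, and clarity scoring is a separate step.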
7. DMCA and Takedown Requests
If you believe any MindCast content infringes your intellectual property rights, please see our DMCA Policy or contact us at info@alstonanalytics.com.
8. Contact
Questions about our content sourcing approach? Contact us at info@alstonanalytics.com.