
AI FAQ – How AdvocacyEd Uses Artificial Intelligence Responsibly

What is AI, and why do you use it?

Artificial Intelligence (AI) is technology that helps analyze patterns in data—like documents, assessments, and performance reports—so we can identify insights more quickly and accurately.
At AdvocacyEd, AI is a supportive tool, not a replacement for professional expertise. Every finding and recommendation is reviewed by me personally, drawing on my experience in special education, compliance, and nonprofit management.


How does AI help families?

AI helps me analyze IEPs, 504 plans, and student progress data efficiently, highlighting inconsistencies or missing supports that might otherwise go unnoticed. It allows me to give families clearer, data-driven recommendations—helping parents become stronger advocates for their children.


How does AI help nonprofits?

For nonprofits, AI enables deeper, faster analysis of compliance data, accreditation readiness, and organizational trends. It supports the creation of dashboards, workforce models, and reports that drive strategic planning and quality improvement.


Do you store or share client data with AI systems?

No. Client data remains confidential and is never shared with third-party AI systems that retain or train on user information.


All analyses are performed using secure, compliant tools and encrypted storage aligned with FERPA, HIPAA, and GDPR standards.


How do you ensure confidentiality?

I use data minimization (analyzing only what’s necessary), encryption, and local or private AI models that do not transmit your files to open networks.


Before any document enters an AI system, including private ones, I redact personally identifiable information. Any shared data (like an IEP or treatment plan) is deleted once your report is finalized, unless you request archival for ongoing support.
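For the technically curious, here is a minimal sketch of what automated redaction can look like. This is an illustration only, not my actual tooling: the patterns and placeholder labels below are simplified examples, and real redaction of FERPA- or HIPAA-covered documents involves much more than a few patterns (names, addresses, student IDs, and review by a human).

```python
import re

# Hypothetical, simplified PII patterns for illustration.
# Order matters: the SSN pattern runs before the broader phone pattern.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace common PII patterns with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text
```

For example, `redact("Call 555-123-4567")` returns `"Call [PHONE REDACTED]"`, so the document that reaches the AI system carries placeholders instead of personal details.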


What about “AI hallucinations” or errors?

To reduce this risk, I use Retrieval-Augmented Generation (RAG) techniques, which ground the AI's output in verified, uploaded documents rather than letting it generate text from its training memory alone.
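For readers who want a feel for how RAG works, here is a toy sketch of its retrieval step. Real systems score passages with embedding models; this example uses simple word overlap, and the document snippets are invented, so treat it as a conceptual illustration rather than my actual pipeline.

```python
# Toy illustration of RAG's retrieval step: rank uploaded passages by
# word overlap with the question, then prompt the model ONLY with the
# top passages so its answer stays grounded in the actual document.

def retrieve(question: str, passages: list[str], k: int = 1) -> list[str]:
    """Return the k passages sharing the most words with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        passages,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]
```

The key idea is the last step: because the model is shown only the retrieved text, its answer can be traced back to a specific passage in the uploaded document, which is what makes fact-checking the report practical.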


Every report is fact-checked and reviewed manually for compliance accuracy before it’s shared with clients.


Is my information used to train AI models?

No. AdvocacyEd’s systems are configured to prevent client data from being used in model training or shared with external servers.


Can I opt out of AI-assisted analysis?

Absolutely. Clients can request traditional, human-only document review or evaluation at any time. AI is a tool I use to enhance efficiency—not a requirement for service.


What is your ethical commitment?

I follow a responsible-AI framework that emphasizes fairness, transparency, accountability, and privacy. My commitment is to use AI only when it adds value, accuracy, or insight, and always in service of people, not process.
