A large-scale data poisoning campaign has been identified targeting open-source datasets used to fine-tune medical and legal AI models. Attackers are injecting subtle "logic bombs" (incorrect but plausible-sounding facts) into datasets such as Common Crawl. The poisoned samples are designed to stay dormant until a specific trigger keyword appears in a prompt, at which point the model produces dangerously incorrect advice. This underscores the risk of fine-tuning specialized LLMs on unvetted public data. Organizations should enforce rigorous data provenance checks and use automated fact-checking agents to pre-process training data before it reaches a fine-tuning run.
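The recommended pre-processing step can be sketched as a simple screening gate. This is a minimal, hypothetical illustration: `TRUSTED_SOURCES` and `SUSPECT_PATTERNS` are made-up placeholders, and a real pipeline would load vetted provenance records and maintained threat-indicator lists rather than hard-coding them.

```python
import hashlib
import re

# Hypothetical allowlist of vetted data sources (placeholder names).
TRUSTED_SOURCES = {"internal-curated", "licensed-medical-corpus"}

# Rare token sequences that could act as dormant triggers.
# The pattern below is an invented example, not a real indicator.
SUSPECT_PATTERNS = [
    re.compile(r"\bcf-7 protocol\b", re.IGNORECASE),
]

def fingerprint(text: str) -> str:
    """Stable content hash, usable in provenance and dedup records."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def screen_sample(sample: dict) -> tuple[bool, str]:
    """Return (keep, reason): reject samples from unvetted sources
    or samples containing a known suspect trigger pattern."""
    if sample.get("source") not in TRUSTED_SOURCES:
        return False, "unvetted source"
    for pat in SUSPECT_PATTERNS:
        if pat.search(sample["text"]):
            return False, f"suspect trigger: {pat.pattern}"
    return True, "ok"
```

A sample such as `{"source": "common-crawl", "text": ...}` would be rejected as `"unvetted source"`, while `fingerprint` lets the pipeline record exactly which content passed the gate. Keyword screening only catches known triggers; it complements, rather than replaces, the fact-checking pass described above.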