Additive manufacturing has revolutionized production capabilities across industries, yet quality assurance and process optimization remain significant challenges due to complex, multi-parameter interactions. Explainable Artificial Intelligence (XAI) offers potential solutions by providing interpretable insights into manufacturing processes, yet its systematic application remains fragmented, and a comprehensive understanding of integration patterns, effectiveness metrics, and implementation barriers is still lacking. Following PRISMA guidelines, this systematic review searched the Scopus, Web of Science, IEEE Xplore, and ScienceDirect databases for studies published from inception to 2025. Of 211 initial records, 38 peer-reviewed studies met the inclusion criteria after screening and quality assessment. Results reveal that, although XAI achieves high predictive performance, critical gaps in interpretability standardization hinder industrial deployment. SHAP dominates applications (58% adoption), with quality control accounting for 39% of studies. Regression tasks achieve R² > 0.90 in 76% of cases, and classification tasks report >95% accuracy in 71% of cases. However, only 21% of studies provide a quantitative assessment of interpretability. These findings establish a foundation for developing standardized XAI evaluation frameworks in manufacturing contexts. Ensemble methods and physics-informed approaches offer the most promising pathways toward both high performance and mechanistic interpretability in safety-critical manufacturing environments.