Physics-Informed Neural Networks (PINNs) represent a transformative approach to solving partial differential equation (PDE)-based boundary value problems by embedding physical laws into the learning process, thereby mitigating the non-physical predictions and dependence on large datasets that limit purely data-driven neural networks. This review analyzes critical challenges in PINN development, focusing on loss function design, the integration of geometric information, and the application of these methods to engineering modeling. We explore advanced strategies for constructing loss functions, including adaptive weighting, energy-based, and variational formulations, that enhance optimization stability and ensure physical consistency across multiscale and multiphysics problems. We emphasize geometry-aware learning through analytical representations, namely signed distance functions (SDFs), phi-functions, and R-functions, with complementary strengths: SDFs enable precise local boundary enforcement, whereas phi- and R-functions capture global multi-body constraints in irregular domains; in practice, hybrid use of both is effective for engineering problems. We also examine adaptive collocation sampling, domain decomposition, and hard-constraint mechanisms for boundary conditions that improve convergence and accuracy, and we discuss integration with commercial CAE tools via hybrid schemes that couple PINNs with classical solvers (e.g., FEM) to boost efficiency and reliability. Finally, we consider emerging paradigms, including Physics-Informed Kolmogorov–Arnold Networks (PIKANs) and operator-learning frameworks (DeepONet, Fourier Neural Operator), and outline open directions in standardized benchmarks, computational scalability, and multiphysics/multi-fidelity modeling for digital twins and design optimization.
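To make the core idea concrete, the sketch below illustrates, in JAX, two of the ingredients named above: a physics-informed residual loss evaluated at collocation points, and a hard boundary constraint built from a distance-like function that vanishes on the boundary. The specific problem (a one-dimensional Poisson equation with homogeneous Dirichlet conditions), the network size, and the plain gradient-descent loop are illustrative assumptions chosen for brevity; they do not correspond to any particular formulation surveyed in this review.

```python
# Minimal, illustrative PINN sketch (assumed setup, not from the reviewed literature):
# solve -u''(x) = pi^2 sin(pi x) on (0, 1) with u(0) = u(1) = 0; exact solution u(x) = sin(pi x).
import jax
import jax.numpy as jnp

def init_mlp(key, sizes=(1, 32, 32, 1)):
    """Initialize a small fully connected network."""
    params = []
    for m, n in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (m, n)) / jnp.sqrt(m), jnp.zeros(n)))
    return params

def mlp(params, x):
    h = jnp.atleast_1d(x)
    for w, b in params[:-1]:
        h = jnp.tanh(h @ w + b)
    w, b = params[-1]
    return (h @ w + b)[0]

def u_hat(params, x):
    # Hard constraint: phi(x) = x * (1 - x) vanishes at x = 0 and x = 1, so the
    # Dirichlet conditions hold exactly for any network output (no boundary loss term).
    return x * (1.0 - x) * mlp(params, x)

def pde_residual(params, x):
    # Residual of -u'' = pi^2 sin(pi x), with u'' obtained by automatic differentiation.
    u_xx = jax.grad(jax.grad(u_hat, argnums=1), argnums=1)(params, x)
    return -u_xx - jnp.pi ** 2 * jnp.sin(jnp.pi * x)

def loss(params, xs):
    # Physics-informed loss: mean squared PDE residual over interior collocation points.
    return jnp.mean(jax.vmap(lambda x: pde_residual(params, x) ** 2)(xs))

key = jax.random.PRNGKey(0)
params = init_mlp(key)
xs = jnp.linspace(0.01, 0.99, 64)   # interior collocation points
grad_fn = jax.jit(jax.grad(loss))
lr = 1e-3                           # plain full-batch gradient descent, for simplicity
for step in range(2000):
    grads = grad_fn(params, xs)
    params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

print("u(0.5) ~=", float(u_hat(params, 0.5)), " (exact value: 1.0)")
```

The design decisions deliberately simplified here, such as how residual terms are weighted, how collocation points are sampled, which distance or R-function encodes the geometry, and which optimizer (typically Adam or L-BFGS rather than the plain loop above) drives training, are precisely the topics examined in the body of this review.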