ABSTRACT
This study investigates the potential of Multimodal Large Language Models (MLLMs) to evaluate the quality of Unified Modelling Language (UML) class diagrams, with a focus on their ability to assess class structures and attribute information in alignment with object‐oriented design principles. Thirty‐four engineering students completed a design task involving the application of five object‐oriented design principles known collectively as the S.O.L.I.D. principles (Single Responsibility, Open/Closed, Liskov Substitution, Interface Segregation, and Dependency Inversion). Their solutions were independently assessed by three expert instructors and four MLLMs: ChatGPT‐4, Gemini, Amazon AI, and Claude 3.5 Sonnet. Quantitative analysis compared AI‐generated scores to instructor consensus ratings using inter‐rater reliability metrics, while a grounded theory approach was used to qualitatively identify and classify AI evaluation errors. Results indicate that although the MLLMs achieved partial alignment with expert scores, they consistently exhibited notable limitations in semantic interpretation and evaluative reasoning, often producing inconsistent assessments. These findings highlight that, despite their potential, MLLMs are not yet reliable replacements for human expertise and underscore the need for better model alignment with domain‐specific assessment practices. They also point toward future directions for carefully integrated hybrid instructor‐AI evaluation workflows in educational settings.