Extracting material property data from scientific text is pivotal for advancing data-driven research in chemistry and materials science; however, the extensive annotation effort required to produce training data for named entity recognition (NER) models often poses a barrier to extracting specialized data sets. In this work, we present a comparative study of conventional supervised NER against alternative few-shot learning architectures and large language model (LLM)-based approaches that mitigate the need to label large training data sets. We find that the best-performing LLM (GPT-4o) not only excels at directly extracting relevant material properties from limited examples but also enhances supervised learning through data augmentation. We supplement our findings with error and data quality assessments to provide a nuanced understanding of the factors that impact property measurement extraction.