Fine-grained entity recognition (FGER) has attracted increasing attention in information extraction and many other natural language understanding applications. However, it remains a challenging problem in specialized domains due to the lack of domain-specific labeled data. To address this challenge, recent advances in language modeling, such as the generative pretrained transformer (GPT), offer promising alternatives. Since large language models (LLMs) can perform various tasks, such as text generation, summarization, and information extraction, without labeled data, we apply them to FGER. Nonetheless, when too many verbose labels are fed to an LLM simultaneously, the model occasionally generates content that diverges from the user input, contradicts previously generated context, or misaligns with established world knowledge, a phenomenon known as hallucination. In this article, we propose a new method, FGER-GPT, to address these issues. Our approach leverages multiple inference chains and incorporates a hierarchical strategy for recognizing fine-grained entities, yielding a significant performance gain. Importantly, the proposed approach uses neither coarse-grained nor fine-grained entity annotations, avoiding costly manual labeling. Extensive experiments on widely used datasets demonstrate that FGER-GPT achieves competitive performance compared with state-of-the-art approaches in low-resource scenarios, highlighting its feasibility for real-world applications.
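The hierarchical, multi-chain idea sketched in the abstract can be illustrated as follows. This is a minimal, hypothetical sketch, not the paper's actual implementation: the label hierarchy, the `query_llm` stub (which stands in for a real LLM prompt and here returns a deterministic dummy answer), and the majority-vote aggregation are all illustrative assumptions. The point is the control flow: vote over several inference chains to pick a coarse type, then restrict the candidate set to that type's subtypes and vote again for the fine-grained label.

```python
from collections import Counter

# Hypothetical two-level label hierarchy: coarse types -> fine-grained subtypes.
HIERARCHY = {
    "person": ["person/artist", "person/politician"],
    "location": ["location/city", "location/country"],
}

def query_llm(mention, context, candidates, chain_id):
    """Stand-in for one LLM inference chain.

    A real system would build a prompt from the mention, its context, and the
    candidate labels, then parse the model's answer. Here we return a
    deterministic dummy choice purely for illustration.
    """
    return candidates[chain_id % len(candidates)]

def classify(mention, context, n_chains=3):
    # Stage 1: run several inference chains over the coarse types and
    # take a majority vote, which dampens per-chain hallucinations.
    coarse_votes = Counter(
        query_llm(mention, context, list(HIERARCHY), c) for c in range(n_chains)
    )
    coarse = coarse_votes.most_common(1)[0][0]
    # Stage 2: only the subtypes of the winning coarse type are offered,
    # so the model never sees the full verbose label set at once.
    fine_votes = Counter(
        query_llm(mention, context, HIERARCHY[coarse], c) for c in range(n_chains)
    )
    return fine_votes.most_common(1)[0][0]
```

The design choice worth noting is the narrowing of candidates between stages: each LLM call sees only a small label set, which is the abstract's remedy for hallucination induced by feeding many verbose labels at once.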