From Scarcity to Abundance: Scholars and Scholarship in an Age of Generative Artificial Intelligence

Authors
Matthew Grimes (University of Cambridge), Georg von Krogh (ETH Zurich), Stefan Feuerriegel (LMU Munich), Floor Rink (University of Groningen), Marc Gruber (Ecole Polytechnique Fédérale de Lausanne)
Source
Journal: Academy of Management Journal [Academy of Management], From the Editors
Volume/Issue: 66(6): 1617–1624. Cited by: 2. Published online: 19 December 2023.
Identifier
DOI: 10.5465/amj.2023.4006
Abstract

Generative artificial intelligence (AI) refers to a class of machine learning technologies that have the capability to generate new content that resembles human-created output, such as images, text, audio, and videos. Within the context of generative AI, much hype has been directed in particular toward large language models, given the associated tools' ability to generate coherent and contextually relevant text from a user-defined prompt. As editors of a journal devoted to advancing knowledge production in the areas of management and organizations, we believe it is important to recognize the promise of generative AI for advancing such knowledge production. In particular, we see opportunities for scholars to increasingly make use of generative AI to assist with the entire value chain of knowledge production, from synthesis, to creation, to evaluation and translation (Bartunek, Rynes, & Daft, 2001; Kilduff, Mehra, & Dunn, 2011; Van De Ven & Johnson, 2006). Moreover, these same tools promise to increase both the efficiency and rigor of our research methods.

Even as we recognize the promise of applying this technology to the craft of scholarly knowledge production, we also recognize the potential perils of doing so. The integrity of the scholarly research process depends upon methodological transparency and reliability. Yet implementation of large language models at scale poses significant risks to such transparency and reliability, even as it introduces additional concerns regarding accuracy, such as "AI hallucinations" and what we are calling "deep research fakes" (Walters & Wilder, 2023). Where the former refers to coherent, contextually relevant, yet false information provided by generative AI that may deceive the researcher, the latter refers to the intentional use of generative AI to create and manipulate data in order to deceive the scholarly community. Moreover, as with all forms of craft, as technology increasingly mediates the human-centered foundations of scholarship, questions are likely to arise about what constitutes "authentic" scholarship (Kroezen, Ravasi, Sasaki, Żebrowska, & Suddaby, 2021; Voronov, Foster, Patriotta, & Weber, 2023), and under what conditions human judgment may be necessary to uphold such authenticity.

Taken together, the promise and peril of generative AI raise a number of critical questions. First, how might generative AI be used to increase both the quality and quantity of interesting and important scholarship? Second, given the aforementioned tensions surrounding its usage, why should we be cautious in using generative AI?
To date, prior editorials and publications from across the academic profession, including a recent Academy of Management Journal editorial (Dwivedi et al., 2023; Susarla, Gopal, Thatcher, & Sarker, 2023; Thorp, 2023; von Krogh, Roberson, & Gruber, 2023), have begun to consider both of these pragmatic and normative questions in light of the underlying technology and methodology. However, given the broad and nontechnical appeal of generative AI through tools such as ChatGPT and Microsoft Copilot, we see an opportunity to advance this conversation toward a third, broader, and more future-oriented question: What are the implications of generative AI for the scholarly profession, and for journals like the Academy of Management Journal in particular? To be clear, as we address this third and final question, our intention is not to codify our profession's response to the future development of this technology, but rather to expose the various uncertainties that will affect its development and its implications for scholars and scholarship. In other words, we are merely at the beginning of a conversation we expect to be having for many years to come.

HOW MIGHT GENERATIVE AI BE USED?

In order to understand how generative AI might be used to enhance the quality and quantity of scholarly knowledge production, it is worth deconstructing the scholarly process into its constitutive parts, which we suggest entail (a) knowledge synthesis, (b) knowledge development, (c) knowledge evaluation, and (d) knowledge translation. While some of these steps rely more heavily on "know-what"—the knowledge of data, information, concepts, and theories—other steps rely more on "know-how"—procedural (often tacit) knowledge of how to perform scholarship. In the following paragraphs, we discuss what we perceive as the promise of generative AI for enhancing each of these steps in the process of scholarship.

Knowledge Synthesis

The process of scholarship often starts with the surfacing of theoretically interesting and practically important questions. As such, much of the training provided in PhD programs involves exposure to broad literatures, with the aim of familiarizing students with the accumulated body of knowledge in management research, as well as its frontiers, so that they might identify opportunities for meaningfully advancing that knowledge (Dencker, Gruber, Miller, Rouse, & von Krogh, 2023). Such advances are often thought to be more interesting if they are accompanied by theoretical or practical puzzles (Tihanyi, 2020). Moreover, interesting questions are often thought to lie at the intersection of multiple literatures (Grant & Pollock, 2011). While academics often develop efficiencies in knowledge synthesis work within and across literatures over time, such efficiencies can take years to realize. Conversely, emerging generative AI tools increasingly offer capabilities aimed at increasing those efficiencies and the pace at which scholars realize them (Heidt, 2023). In addition, while the process of discerning or surfacing interesting research questions is often thought to be the purview of human creativity, recent studies have highlighted how GPT-4 can outperform 90% (Haase & Hanel, 2023) and 99% (Guzik, Byrge, & Gilde, 2023) of people on different creativity tests. The extent to which generative AI will augment or replace academics in the creative tasks associated with scholarship is a matter of debate (indeed, the authors of this editorial have internally expressed such debate), yet the potential should be taken seriously.
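The kind of synthesis support we have in mind can be made concrete with off-the-shelf text embeddings. The sketch below ranks a handful of invented paper abstracts against a candidate research question; it is a minimal illustration assuming the open-source sentence-transformers library, and the model name, abstracts, and query are placeholders rather than a recommended workflow.

```python
# A minimal sketch of embedding-based literature synthesis, assuming the
# open-source sentence-transformers library. The model name, abstracts,
# and query are illustrative placeholders, not a prescribed setup.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed lightweight model

abstracts = {
    "Paper A": "Institutional logics and hybrid organizing in social ventures.",
    "Paper B": "Machine learning methods for inductive theory building.",
    "Paper C": "Craft, authenticity, and the organization of work.",
}

query = "How might generative AI reshape theory building in management?"

query_emb = model.encode(query, convert_to_tensor=True)
corpus_emb = model.encode(list(abstracts.values()), convert_to_tensor=True)

# Rank papers by cosine similarity to the research question.
scores = util.cos_sim(query_emb, corpus_emb)[0]
for (title, _), score in sorted(
    zip(abstracts.items(), scores.tolist()), key=lambda x: -x[1]
):
    print(f"{title}: {score:.2f}")
```

Scaled to a full bibliographic corpus, the same ranking step is what AI-assisted literature search engines of the kind Heidt (2023) surveys use to surface adjacent literatures a scholar might otherwise miss.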
Knowledge Development

The second stage in the process of scholarship often requires knowledge development work, such as the ability to collect, model, and analyze data, and then to craft parsimonious arguments that convey complex phenomena and relationships in simple terms. Here, generative AI promises to profoundly affect the methodological process that underpins knowledge development work, through increases in both efficiency and rigor. With respect to efficiency, we envision that generative AI will likely lower the barriers to methodological competence, while also speeding up the execution of individual research tasks. In terms of data collection, we can imagine scholars making use of generative AI to assist with research design as well as instrument design (e.g., survey questionnaires, interview guides). In terms of data analysis, simple prompts or sequences of prompts may be sufficient to engage in sophisticated inductive exploration of large quantitative or qualitative data sets (Gamieldien, Case, & Katz, 2023; Wang et al., 2023). Similarly, simple prompts combined with the above-referenced knowledge synthesis work may be sufficient not only to quickly surface a set of credible yet interesting hypotheses but also to systematically test those hypotheses (von Krogh et al., 2023; Wang et al., 2023). Altogether, these developments enable an accelerated pace of research, leading to what may be called an "instant research paradigm" in management research (Gruber, 2023: 3).

With respect to methodological rigor, there are a number of plausible ways in which generative AI might be used to enhance reliability and validity. For instance, for experimental research, generative AI may be used to produce augmented data to test hypotheses in various scenarios, ensuring that findings are consistent and reliable across diverse data sets. It may be used to identify inconsistencies in data, thus ensuring that the data used in analyses are of high quality, and thereby improving the reliability of results. It may be used to create complex simulation models or scenarios that might be difficult to examine in real-world conditions, potentially enhancing external validity. Further, it might be used as a source of researcher feedback, identifying and notifying researchers of potential biases or errors in their methods that could undermine reliability and validity.

Beyond the ability to improve data collection and exploration, these tools also offer researchers the potential to improve argumentative clarity while developing hypotheses and propositions. Through outline generation, language enhancement, and the proactive identification of and response to counterarguments, generative AI promises to increase the persuasiveness of scholars' knowledge development work.
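As one illustration of the prompt-driven inductive exploration described above, the sketch below asks a large language model to propose first-order codes for two invented interview excerpts. It assumes the openai Python client (v1 interface); the model name and prompt wording are our own assumptions, and any suggested codes would still require verification by a human coder.

```python
# A hedged sketch of LLM-assisted inductive coding, assuming the openai
# Python client (v1 interface). Model name and prompt are illustrative;
# outputs are suggestions for a human coder to verify, not final codes.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

excerpts = [
    "We adopted the tool because everyone else in the incubator had.",
    "Honestly, the board pushed us; we never ran our own evaluation.",
]

prompt = (
    "You are assisting with inductive qualitative analysis. "
    "For each interview excerpt, propose one or two short first-order codes "
    "and a one-sentence rationale.\n\n"
    + "\n".join(f"Excerpt {i + 1}: {text}" for i, text in enumerate(excerpts))
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; substitute whatever is available
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

In practice, a researcher would iterate on the prompt and reconcile the model's suggestions against a conventional codebook rather than accept them wholesale.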
Knowledge Evaluation

In the same way that generative AI might enhance the ability of authors to proactively discern deficiencies in both their analysis and argumentation, so too might it assist journal editors and reviewers in identifying otherwise overlooked concerns as they engage in knowledge evaluation. As far back as 2006, Chet Miller wrote an editorial for the Academy of Management Journal discussing issues such as reviewer hostility, bias, and dissensus, as well as some of the mitigating approaches journals could take to overcome those issues (Miller, 2006). By providing reviewers and editors with additional feedback on the quality of submitted manuscripts, as well as on the quality of reviews, it is possible that generative AI could be used to further mitigate such peer-review concerns. Moreover, generative AI could be used to match reviewers to manuscripts based on expertise, thus mitigating the likelihood of bias due to a lack of topical or methodological familiarity.
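The reviewer-matching idea admits an equally simple first pass using classical information retrieval, which has the advantage of being transparent and auditable. The sketch below, assuming scikit-learn and invented reviewer profiles, ranks reviewers by the cosine similarity between TF-IDF vectors of each profile and a submission's abstract; the embedding approach sketched earlier would serve equally well.

```python
# A minimal reviewer-matching sketch using TF-IDF cosine similarity,
# assuming scikit-learn. Reviewer profiles and the abstract are invented
# placeholders; a real system would need richer expertise signals.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reviewer_profiles = {
    "Reviewer 1": "field experiments, survey design, team performance",
    "Reviewer 2": "qualitative methods, institutional theory, craft work",
    "Reviewer 3": "machine learning, text analysis, research methods",
}

submission_abstract = (
    "We use large language models to inductively code interview data "
    "on authenticity in craft organizations."
)

corpus = list(reviewer_profiles.values()) + [submission_abstract]
tfidf = TfidfVectorizer().fit_transform(corpus)

# Similarity of each reviewer profile (rows 0..n-1) to the abstract (last row).
scores = cosine_similarity(tfidf[:-1], tfidf[-1]).ravel()
for name, score in sorted(zip(reviewer_profiles, scores), key=lambda x: -x[1]):
    print(f"{name}: {score:.2f}")
```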
Knowledge Translation

Finally, the process of scholarship often benefits from knowledge translation work: efforts to ensure the broad accessibility of that knowledge. Within most management journals, such translation activities are focused on converting specific empirical findings into broader theoretical contributions. Increasingly, however, many scholars have advocated for improvements in translation efforts aimed at closing the theory–practice divide and ensuring the responsible impact of our scholarship (Ployhart & Bartunek, 2019; Tsui, 2021; Wickert, 2021). Yet, despite these calls, our professional incentives often encourage less focus on such practical translation efforts (Aguinis, Archibold, & Rice, 2022). Additionally, even as the study of management and organizations has globalized, much scholarship remains siloed by language barriers. Here again, generative AI promises to increase the efficiency of such translation efforts through simple prompts (Ahuja et al., 2023; Anderson, Kanneganti, Houk, Holm, & Smith, 2023).

WHY SHOULD WE BE CAUTIOUS IN USING GENERATIVE AI?

Although there is great promise for using generative AI to increase both the quality and the efficiency of scholarship, the risks are manifold. First, generative AI models, and specifically large language models, are trained on massive data sets of text and code. These data can include a variety of sources, such as news articles, social media posts, and academic papers. Using these large data sets, generative AI formulates responses based on patterns rather than specific sources. Yet the actual data and algorithms from which these patterns are deciphered are often "black boxed," meaning that it is difficult to understand how conclusions are arrived at. This lack of transparency can make it difficult to assess the reliability and validity of generated results. Moreover, given this lack of transparency and the constant updating of the models and underlying data structures, it can be difficult for other researchers to replicate results or to identify potential methodological flaws. This problem is further magnified when humans interface with proprietary systems and are thus removed from the inner workings of the deep learning processes that power generative AI.

Second, and relatedly, contemporary versions of generative AI rarely offer sourced evidence to substantiate the arguments they put forth. Moreover, at least at the moment, generative AI does not (yet) truly understand the use of language in specific contexts in the way humans do (Wittgenstein, 1953). Thus, generative AI would struggle to understand the nuances of a particular academic field or the latest groundbreaking research. Without this deep understanding, it is questionable whether generative AI could reliably provide or source the most current or relevant evidence. In academic research, where precision and accuracy are paramount, this raises significant concerns.

Third, as previously noted, generative AI has a tendency to hallucinate, producing persuasive, albeit factually inaccurate or misrepresented, information. This occurs because contemporary generative AI models are probabilistic in nature: the model makes a prediction about what a meaningful answer might look like without applying any form of logical or contextual reasoning. In academic contexts, such hallucinations could have serious repercussions if they were to go unchecked, including an increase in retractions; damage to author, journal, and professional reputations; and ripple effects from misleading future research.

Fourth, generative AI's ability to produce human-like text, images, and even videos has opened the door to the creation of "deep fakes" in various contexts, including academic scholarship (Liverpool, 2023). For instance, generative AI could be used to fabricate quantitative and qualitative data sets that appear legitimate but are entirely invented. This could include fake experimental results, surveys, or observational data. Beyond such foundational fabrications, we see the potential for generative AI to produce fake citations, as well as to manipulate images and graphs, all in the service of supporting false claims.

Fifth, and finally, we see risks not only in the process of authoring scholarship but also in reviewing and evaluating it. Should reviewers or editors start making increased use of generative AI to summarize, critique, and evaluate manuscripts, we would expect additional challenges for journals in avoiding both "false negatives," wherein potentially groundbreaking work is rejected, and "false positives," wherein inaccurate, biased, or even fabricated studies are accepted.

Taken together, these risks are profound, demanding thoughtful responses from our profession and from the journals that currently uphold the integrity of academic knowledge production. The Academy of Management, as our field's largest professional organization, will soon publish guidelines regarding the use of generative AI for its journals and conference submissions. At the moment, however, many existing journal policies surrounding generative AI appear to operate on the assumption that authors, reviewers, and editors will act in good faith. Yet, given both the severity and probability of the aforementioned risks, as well as the highly dynamic nature of the underlying technological developments, we believe such an assumption is inadequate; additional and evolving governance designed to deter bad practice is required. Such governance might, for instance, include AI training for authors and reviewers, specialized review protocols for papers that employ generative AI, the use of AI-assisted verification systems (see the sketch below), and periodic audits. While such guardrails would need further debate and discussion, we also see this as an opportunity for further research, whereby management and organizational scholars might investigate both the technological and organizational steps that would reduce the aforementioned risks.
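As one concrete form such AI-assisted verification might take, consider automated citation screening: fabricated references of the kind documented by Walters and Wilder (2023) often fail to resolve against bibliographic databases. The sketch below, assuming the requests library and the public Crossref REST API, checks whether a cited title and year can be matched; under those assumptions it is a first-pass screen that flags references for human follow-up, not a fraud detector.

```python
# A first-pass citation screen against the public Crossref REST API,
# assuming the requests library. A weak match flags a reference for
# human follow-up; it does not by itself prove fabrication.
import requests

def check_citation(title: str, year: int) -> bool:
    """Return True if Crossref's best match resembles the cited work."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        return False
    top = items[0]
    found_title = (top.get("title") or [""])[0].lower()
    issued = top.get("issued", {}).get("date-parts", [[None]])[0][0]
    # Crude heuristic: substantial title word overlap plus a matching year.
    overlap = set(title.lower().split()) & set(found_title.split())
    return len(overlap) >= min(4, len(title.split())) and issued == year

ok = check_citation(
    "Fabrication and errors in the bibliographic citations generated by ChatGPT",
    2023,
)
print("plausible match" if ok else "flag for human review")
```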
WHAT ABOUT THE PROFESSION?

In a number of ways, professions are like clubs. Most professions maintain not only barriers to entry but also barriers to advancement within those professions. Such barriers to entry and advancement often entail costly individual efforts to demonstrate relevant expertise and quality output. The academic profession, in this sense, is no different.

At present, the academic profession is structured around the presumed scarcity of rigorous scholarly knowledge production, including the generation of new ideas and methods. Journals acquire status within the profession based not only on their impact but also on their exclusivity, as they raise the standards for what qualifies as a novel and important contribution. Tenure-track faculty are competitively hired and promoted based on the perceived quality and quantity of their scholarship, wherein the exclusivity of a given journal is often used as a proxy for assessing that quality. Ultimately, then, the management academic profession is structured around the assumption that scholars have specific knowledge (both "know-what" and "know-how") that is lacking not only in the public but also among other management professionals, including consultancies.

Despite such presumed scarcity, our academic profession is interested in producing and disseminating more knowledge, as long as certain quality standards are upheld. In addition, as noted earlier, generative AI now promises to lower the barriers for individuals to engage more efficiently in rigorous scholarly research. For instance, a recent working paper (Dell'Acqua et al., 2023) highlighted how the use of GPT-4 within Boston Consulting Group not only significantly increased the quality of knowledge outputs but also narrowed the performance gap between bottom-half and top-half performers (from 22% to 4%). Whether we expect exponential or more linear rates of improvement in the technology and its associated scholarship-enabling capabilities is, of course, relevant. Yet, in either case, the profession, and indeed our journals, must give thought to the changes on the horizon, the uncertainties that will intersect with those changes, and the internal responses needed. To provoke this consideration, we pose two questions, given the potential promise of generative AI to increase both the quantity and quality of scholarship: (a) What does it mean to be a "scholar" when the "know-what" and "know-how" barriers to becoming one are minimized (i.e., anyone who wants to can participate in "scholarship")? and (b) What does it mean to be a journal that publishes "scholarship" when the field is flooded with manuscripts that meet the highest possible human-mediated standards for (i) practical importance, (ii) theoretical intrigue, and (iii) methodological rigor? We believe that these questions necessitate a degree of scenario planning, in which we attempt to envision and prepare for multiple possible and uncertain futures.

Scenario Planning

To consider different possible scenarios, it is worth clarifying what we assume to be true, and what we assume to be uncertain yet likely material to understanding the future professional effects of generative AI.

First, we believe it is inevitable that generative AI will become increasingly used to support scholarly knowledge production; we also believe that such usage will simultaneously improve the quality and quantity of knowledge production while accentuating the risks associated with scholarly malpractice.
Second, while there are a host of future uncertainties that are likely to affect the development and adoption of generative AI, we see two particular uncertainties as critical to the technology's impact on the academic profession: systems transparency and societal regulation.

We define systems transparency in this context as the clarity and comprehensibility of the data and methods employed in utilizing generative AI systems, ensuring that other researchers can evaluate, and potentially replicate, the processes and decisions made throughout the research. A recent study, for instance, developed a transparency index by coding 10 foundation models, including many notable generative AI models, on 100 different indicators of transparency related to the resources and data used to build them, the models themselves, and their usage (Bommasani et al., 2023). While scholars will inevitably debate the specific indicators and evidence of such an index, the study usefully highlighted the uncertain and highly varied evolution of generative AI as it pertains to systems transparency. From the perspective of scholarly replicability and reproducibility, the conventional assumption is that we need access to all data and all model details (Shrestha, von Krogh, & Feuerriegel, 2023). Yet, as the study highlighted, most privately owned models fall well short of fulfilling these assumptions (Bommasani et al., 2023).

Separately, the second source of uncertainty we explore, societal regulation, can be understood as the extent of governmental legislation restricting the development or usage of generative AI. Many global regulatory bodies, for example, are concerned with the potentially disruptive forces associated with generative AI, and may very well look to curb such disruption. For instance, societies may restrict usage to particular domains, or regulate transparency by requiring developers to disclose system details or submit their algorithms to audits. It may be that the input data for generative AI models become increasingly subject to data privacy restrictions. Finally, it may be that all outputs are regulated, for instance, by way of watermarking.

Combined, these two dimensions of uncertainty are likely to implicate our profession of management academia in different ways, which we explore in the following four scenarios (see Figure 1).

FIGURE 1: Scenario Illustrations of the Impact of Generative AI on the Academic Profession

Scenario 1: Low systems transparency, low societal regulation.

In this scenario, both systems transparency and societal regulation are limited. The underlying algorithms of generative AI systems are proprietary and closely guarded, making it challenging for academics to fully comprehend the systems' functioning. At the same time, there are minimal legislative restrictions on the usage of AI, and it is widely adopted across academia and other sectors without stringent public oversight.

The lack of systems transparency poses significant challenges for academia. Researchers may become overly reliant on AI-generated knowledge without fully understanding the underlying processes, leading to potential blind spots and biases in their work (De-Arteaga, Feuerriegel, & Saar-Tsechansky, 2022).
Moreover, the absence of societal regulation means that there is limited accountability for the use of AI in research, which could result in the dissemination of misinformation or unethical practices.

In a situation of low systems transparency, the credibility and accountability of AI-generated knowledge may be questionable. On the one hand, this could lead to a lack of public trust in AI-generated research; on the other hand, it may lead to an even greater need for academic experts to continue to hold a significant role in knowledge production and verification. The academic profession's exclusivity may remain relatively stable, as reliance on human expertise remains crucial to maintaining quality and reliability in academic knowledge.

Scenario 2: High systems transparency, low societal regulation.

In this scenario, AI developers prioritize systems transparency, allowing experts to comprehensively understand AI systems. This transparency may be underpinned by open-source licensing and communal development principles applied to the creation and deployment of generative AI (Shrestha et al., 2023). However, there is limited societal regulation in place to control the usage of generative AI. The rapid advancement of AI technology outpaces the development of appropriate regulations, and there is a strong push for innovation and unrestricted implementation.

With high systems transparency, AI-generated research may still be subject to scrutiny and validation, but the absence of strict societal regulation could lead to widespread use of AI-generated academic content. This scenario might result in an exponential increase in knowledge production, as the process becomes accessible to a larger number of knowledge workers. Consequently, the academic profession's exclusivity may decrease as the barriers to knowledge creation and dissemination are significantly lowered.

Scenario 3: Low systems transparency, high societal regulation.

In this scenario, there is strong societal regulation regarding the usage of generative AI, but the systems themselves remain opaque and difficult to comprehend. Governments and institutions impose strict rules to mitigate the potential risks and negative consequences associated with AI deployment.

While the high level of societal regulation aims to protect against misuse and unethical practices, the lack of systems transparency poses challenges for the academic community. Academics may be hesitant to rely on AI-generated knowledge without a deeper understanding of how the systems operate. This cautious approach may limit the integration of AI into research practices, but it also serves as a safeguard against potential pitfalls associated with unscrupulous AI use.

In a context of low systems transparency and high societal regulation, human academics may still hold a central role in the academic profession. The focus could shift toward integrating AI-generated knowledge with human expertise, where AI is used to assist human researchers. The exclusivity of the academic profession may be preserved, as the emphasis on maintaining high standards and ethics in knowledge production remains paramount.

Scenario 4: High systems transparency, high societal regulation.

In this scenario, novel generative AI models are developed with a strong focus on transparency, allowing humans, including academics, to more fully deconstruct and understand the underlying data, models, and usage.
The algorithms behind generative AI systems are inspectable, widely shared, well documented, subject to rigorous scrutiny, and perhaps open sourced. This level of transparency ensures that biases and errors can be identified and corrected, contributing to high-quality knowledge production.

At the same time, there is a high level of societal regulation governing the usage of generative AI. Governments and institutions place strict controls and guidelines on AI research and deployment to prevent misuse and potential harm. Societies are cautious about the rapid advancement of AI and are concerned about its potential impact on social structures, labor markets, and privacy. As a result, AI-generated research and knowledge undergo thorough evaluation and ethical review, protecting academic integrity and the public interest.

With high systems transparency, AI-generated knowledge will be more accountable and more easily scrutinized. This may lead to a situation in which AI-generated research can be effectively challenged and validated by human researchers. Consequently, the role of human academics may shift from traditional knowledge production toward knowledge verification, oversight, and translation, and toward ensuring impact from that knowledge. The exclusivity of the academic profession might decrease as journals become more likely to accept and trust knowledge production and synthesis from accessible AI tools.

CONCLUSION

The knowledge economy has been responsible for unprecedented growth and human flourishing. Within this knowledge economy, the academic profession has played a pivotal role in the creation of new knowledge, leading to technological advancements, new methodologies, and a deeper understanding of the world around us. Generative AI represents a leap forward both in our ability to engage in such knowledge production and in making the process of knowledge production more accessible to others.

Our investigation of the implications of generative AI for management scholarship and for our profession is not meant as a call to arms to defend the profession and its current boundaries. Instead, in the short term, we view this as a call to prepare ourselves, as well as our current and future PhD students, with the appropriate knowledge not only to use but, more critically, to evaluate algorithmic knowledge production. For instance, scholars need to be trained in the risks of using generative AI such as large language models for scholarship, the ethics of transparent usage, and the methodological competencies for ensuring scholarly integrity while using such powerful yet currently opaque tools. In addition, in the long term, we view this editorial as a call to rethink the distinctive value of our profession in a world of abundant management scholarship. In other words, we suspect that a plausible generative AI-led shift from scarce to abundant academic knowledge production will inevitably increase the urgency around answering a fundamental question: To what problems in society is management scholarship the (unique) solution?

REFERENCES

Aguinis, H., Archibold, E. E., & Rice, D. B. 2022. Let's fix our own problem: Quelling the irresponsible research perfect storm. Journal of Management Studies, 59: 1628–1642.

Ahuja, K., Hada, R., Ochieng, M., Jain, P., Diddee, H., Maina, S., Ganu, T., Segal, S., Axmed, M., & Bali, K. 2023. Mega: Multilingual evaluation of generative AI. arXiv.

Anderson, L. B., Kanneganti, D., Houk, M. B., Holm, R. H., & Smith, T. 2023. Generative AI as a tool for environmental health research translation. GeoHealth, 7: e2023GH000875.

Bartunek, J. M., Rynes, S. L., & Daft, R. L. 2001. Across the great divide: Knowledge creation and transfer between practitioners and academics. Academy of Management Journal, 44: 340–355.

Bommasani, R., Klyman, K., Longpre, S., Kapoor, S., Maslej, N., Xiong, B., Zhang, D., & Liang, P. 2023. The foundation model transparency index. arXiv.

De-Arteaga, M., Feuerriegel, S., & Saar-Tsechansky, M. 2022. Algorithmic fairness in business analytics: Directions for research and practice. Production and Operations Management, 31: 3749–3770.

Dell'Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. 2023. Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality. Working Paper No. 24-013, Harvard Business School Technology & Operations Management Unit.

Dencker, J. C., Gruber, M., Miller, T., Rouse, E. D., & von Krogh, G. 2023. Positioning research on novel phenomena: The winding road from periphery to core. Academy of Management Journal, 66: 1295–1302.

Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., Baabdullah, A. M., Koohang, A., Raghavan, V., Ahuga, M., Albanna, H., Albashrawi, M. A., Al-Busaidi, A. S., Balakrishnan, J., Barlette, Y., Basu, S., Bose, I., Brooks, L., Buhalis, D., …, Wright, R. 2023. Opinion paper: "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71: 102642.

Gamieldien, Y., Case, J., & Katz, A. 2023. Advancing qualitative analysis: An exploration of the potential of generative AI and NLP in thematic coding. SSRN Electronic Journal.

Grant, A. M., & Pollock, T. G. 2011. Publishing in AMJ—Part 3: Setting the hook. Academy of Management Journal, 54: 873–879.

Gruber, M. 2023. From the editors—An innovative journal during transformational times: Embarking on the 23rd editorial term. Academy of Management Journal, 66: 1–8.

Guzik, E. E., Byrge, C., & Gilde, C. 2023. The originality of machines: AI takes the Torrance Test. Journal of Creativity, 33: 100065.

Haase, J., & Hanel, P. H. P. 2023. Artificial muses: Generative artificial intelligence chatbots have risen to human-level creativity. arXiv.

Heidt, A. 2023. Artificial-intelligence search engines wrangle academic literature. Nature, 620: 456–457.

Kilduff, M., Mehra, A., & Dunn, M. B. 2011. From blue sky research to problem solving: A philosophy of science theory of new knowledge production. Academy of Management Review, 36: 297–317.

Kroezen, J., Ravasi, D., Sasaki, I., Żebrowska, M., & Suddaby, R. 2021. Configurations of craft: Alternative models for organizing work. Academy of Management Annals, 15: 502–536.

Liverpool, L. 2023. AI intensifies fight against "paper mills" that churn out fake research. Nature, 618: 222–223.

Miller, C. C. 2006. Peer review in the organizational and management sciences: Prevalence and effects of reviewer hostility, bias, and dissensus. Academy of Management Journal, 49: 425–431.

Ployhart, R. E., & Bartunek, J. M. 2019. Editors' comments: There is nothing so theoretical as good practice—A call for phenomenal theory. Academy of Management Review, 44: 493–497.

Shrestha, Y. R., von Krogh, G., & Feuerriegel, S. 2023. Building open-source AI. Nature Computational Science. Forthcoming.

Susarla, A., Gopal, R., Thatcher, J. B., & Sarker, S. 2023. The Janus effect of generative AI: Charting the path for responsible conduct of scholarly activities in information systems. Information Systems Research, 34: 399–408.

Thorp, H. H. 2023. ChatGPT is fun, but not an author. Science, 379: 313.

Tihanyi, L. 2020. From "that's interesting" to "that's important." Academy of Management Journal, 63: 329–331.

Tsui, A. S. 2021. Responsible research and responsible leadership studies. Academy of Management Discoveries, 7: 166–170.

Van De Ven, A. H., & Johnson, P. E. 2006. Knowledge for theory and practice. Academy of Management Review, 31: 802–821.

von Krogh, G., Roberson, Q., & Gruber, M. 2023. Recognizing and utilizing novel research opportunities with artificial intelligence. Academy of Management Journal, 66: 367–373.

Voronov, M., Foster, W., Patriotta, G., & Weber, K. 2023. Distilling authenticity: Materiality and narratives in Canadian distilleries' authenticity work. Academy of Management Journal, 66: 1438–1468.

Walters, W. H., & Wilder, E. I. 2023. Fabrication and errors in the bibliographic citations generated by ChatGPT. Scientific Reports, 13: 14045.

Wang, H., Fu, T., Du, Y., Gao, W., Huang, K., Liu, Z., Chandak, P., Liu, S., Van Katwyk, P., Deac, A., Anandkumar, A., Bergen, K., Gomes, C., Ho, S., Kohli, P., Lasnby, J., Leskovec, J., Liu, T., Manrai, A., …, Zitnik, M. 2023. Scientific discovery in the age of artificial intelligence. Nature, 620: 47–60.

Wickert, C. 2021. Corporate social responsibility research in the Journal of Management Studies: A shift from a business-centric to a society-centric focus. Journal of Management Studies, 58: E1–E17.

Wittgenstein, L. 1953. Philosophical investigations. Chichester, U.K.: Wiley-Blackwell.