aiopinionpiece_text=` # AI in STEM & Security ###### By Daniel Moreno ###### AI · 13 min read · Mar 8, 2026 --- ![image alt ><](https://easy-networksgh.com/wp-content/uploads/2024/08/website-performance-optimization.png) *Source: https://easy-networksgh.com/product/performance-optimization/* Greetings and permutations everyone. In the past month or two, I've been asked my opinion on AI with regards to STEM a few times now. As such, I decided to write a quick piece on my opinion for future reference. No doubt this will age like milk. However, I still think that it is worth it for myself to put my thoughts to paper (or at least the digital equivalent). **Caveat**: My current job involves training enterprise-scale LLMs. Over the past few years, I have helped make certain LLMs better at programming, data science, and CTFs. This definitely colors my perspective. ### What is actually happening day to day in software teams right now? At the companies I've worked for, I've seen companies mandating use of one AI or another. One company had biweekly, semi-mandatory meetings with each meeting being led by a different team. The goal was to teach people different ways to use AIs. Also, I saw a company working to create a local variant of a major LLM trained on their proprietary language. That language is very old, poorly documented, and exhibits strange syntax. Even when the AI made mistakes, I found it rather helpful as a way to supplement the documentation or to point me towards other files in the codebase that I could reference. I've heard rumors (no personal experience here thankfully hence rumors) that a lot of the layoffs are caused by companies hoping that AI will multiply the effectiveness of the workforce (like a particularly effective piece of capital) and thereby reduce the required workforce size. In terms of lines of code written and the time required to review code (especially at first glance), AI has substantially sped things up. However, AI-written code exhibits more logic, correctness, quality, maintainability, security, and performance issues (40%+ more depending on the report). Even in the papers showing increased productivity, note how it is measured and which sample groups see the benefits (junior vs senior engineers). Also, AI tends to struggle with concise or well-organized code (it reads like the work of an intern, at best). To summarize, we seem to be seeing more output, slower delivery, and lower quality. Another issue with AIs is that they have been trained to be confirmation machines. If you ask the LLM to do something illogical, you don't get push-back. This is a big problem if you're actually misunderstanding something in the code. Also, the AI isn't going to acknowledge that it doesn't know the answer or doesn't understand it. If you don't already have a better understanding of the topic than the LLM, you will encounter issues. Ultimately, I like to use AI-written code as a literal frame which I build upon and modify. Like a hammer, AI is a tool that can be useful, but not every problem is a nail. Just treat the AI like an intern. They can be useful, speed up your work, and potentially deal with some tedious tasks. However, you shouldn't assume that their work is perfect and flawless. If you tell it to look for bugs, you'll get a bunch of shallow bugs with a high false positive rate. There is a reason that we're seeing "vibe code cleanup" as a job. Anyway, here's a few papers or videos people may find interesting. [Our new report: AI code creates 1.7x more problems](https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-generation-report) [4x Velocity, 10x Vulnerabilities: AI Coding Assistants Are Shipping More Risks](https://apiiro.com/blog/4x-velocity-10x-vulnerabilities-ai-coding-assistants-are-shipping-more-risks/) [AI Code Review Bottleneck Kills 40% of Productivity](https://byteiota.com/ai-code-review-bottleneck-kills-40-of-productivity/) [Study Finds No DevOps Productivity Gains from Generative AI](https://devops.com/study-finds-no-devops-productivity-gains-from-generative-ai/) [The Effects of Generative AI on High-Skilled Work: Evidence from Three Field Experiments with Software Developers](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4945566) [Coding on Copilot: 2023 Data Suggests Downward Pressure on Code Quality](https://www.gitclear.com/coding_on_copilot_data_shows_ais_downward_pressure_on_code_quality) [Can AI Pass Freshman CS?](https://www.youtube.com/watch?v=56HJQm5nb0U) ### Which skills are becoming more important, and which are becoming less important? * Most enterprise-scale AIs are capable of writing basic code and even good-looking advanced code. They are also decent at summarizing code and writing documentation. Ultimately, AIs have mostly redoubled the importance of various skills. * You can't find the subtle flaws unless you understand code well. * Technical writing is even more important than before. For good or ill, people tend to dismiss text they believe to be written by AI so being able to write well will attract attention (even more attention considering how rare this skill seems to be to begin with). * Critical thinking is rather important. Take everything from an AI with a grain of salt. I don't think that I've ever seen a perfect result from it. * Knowing how to create scalable, secure, high-quality systems is more important than ever. AI can write code and even help with design decisions. However, that's the role of coders and programmers, not software engineers. Software engineers work of massive systems, ensuring security and quality. AI can't do that yet and may never be able to do that. ### Over the next 2, 5, and 10 years, what changes in STEM do you think are most likely? Right now, I think everyone will agree that we are seeing growing pains for an untested technology paired with a fad. Things are going to get a lot worse before they get better, though I think they will. Ultimately, the mere existence of a piece of technology automatically imparts a responsibility for proper use. We don't have to use it, but I know human nature too well to think that it won't be. We'll just have to see whether it actually gets used in a responsible manner. The biggest danger of AI is people misunderstanding its capabilities, whether that be underestimating or overestimating it. Security is especially a nightmare. Prompt injection is a whole thing. Also, what happens if you give AI access to a codebase with a patented (or trade secret) algorithm or to a database containing legally-protected data? Many AIs are programmed to add what they see to the training data (well, one of the training datasets; whole thing there) which means that you are leaking that data. Getting into my personal opinion, I think that AI will eventually plateau (let's say in the next 5 years). A lot of the extremely optimistic side of the discussion reminds me of the discussion from the 90s about the wonders of the Internet and how it would fix all societal woes. Unsurprisingly, the Internet brought about a lot of change, both positive and negative. Anyway, technology tends to follow a sigmoid curve, and I think that we are approaching at least a local plateau. DNNs (the type of AI currently in major use) are mostly only capable of interpolation rather than extrapolation, which means that they can handle stuff within their training data set but will make stuff up if you move beyond what they "know". We are already running into issues regarding electricity, compute power, silicon and rare metals to manufacture the parts, and the like. [Leopold](https://www.lawfaremedia.org/article/ai-timelines-and-national-security--the-obstacles-to-agi-by-2027) [Aschenbrenner](https://situational-awareness.ai/) predicted an "intelligence explosion" in 2027 due to certain scaling laws, but new papers are coming out that refute his predictions (like [The wall confronting large language models](https://arxiv.org/abs/2507.19703)). Basically, the new papers say that to reduce an AI's error rate by 1 order of magnitude would require 10^20 times more compute power. If we can get quantum computing off the ground and commercially-viable, then that would temporarily mitigate the compute power problem. However, there is a minority of physicists, computer scientists, and mathematicians (Gil Kalai, Robert Alicki, Leonid Levin, Stephen Wolfram, Gerard 't Hooft, Tim Palmer, Roger Penrose, anyone supporting spontaneous localization, etc.) that think quantum computers will never be viable which would definitely cause some issues for AI. Out of the various AI types, LLMs in particular are incapable of logical reasoning or generalization (see [On the Biology of a Large Language Model](https://transformer-circuits.pub/2025/attribution-graphs/biology.html), [Is Chain-of-Thought Reasoning of LLMs a Mirage? A Data Distribution Lens](https://arxiv.org/abs/2508.01191), and [Adversarial Policies Beat Superhuman Go AIs](https://arxiv.org/abs/2211.00241)). That doesn't mean non-LLM AIs won't solve the problem; it's just that LLMs are so popular right now that few people are studying non-LLM AIs. Institutional inertia is a thing, though the fad component minimized that at the higher levels of management. In addition, I think that we're moving into the "trough of disillusionment" when it comes to [Gartner's hype curve](https://en.wikipedia.org/wiki/Gartner_hype_cycle) so you're going to see less money thrown at all of these problems ([MIT NANDA report](https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf), [Goldman Sachs on AI bubble](https://www.goldmansachs.com/pdfs/insights/goldman-sachs-research/ai-in-a-bubble/report.pdf)). Even if we create highly-advanced and capable AIs, you will run into issues with scalability and copying it. You could have an Artificial Super Intelligence, but the government has to issue ration cards to use it (a fun book concept). I wouldn't be discouraged or disincentivized from deeply looking into these topics. LLMs are inherently unreliable, at least in part due to their nondeterministic behavior. So long as AIs exist, we will need a workforce of people that understand the topics well enough to fact-check them, if nothing else. You'll need to grow your skillset over time and constantly keep up (AI certainly won't be the last "game-changer" that pops up). However, cybersecurity isn't going to become outmoded short of the concept of "electronics" disappearing or everyone becoming perfect human beings incapable of accident or malice. In the short term, perhaps you'd be better off learning how agents are constructed and setting up solvers as there is definitely an industry for that, but the old jobs won't be disappearing. At most, they'll change permanently to incorporate AI. However, they may just change temporarily and revert once the hype dies down. There is already an interesting trend in software developer jobs. We have a new job popping all over the place: vibe code cleanup. For at least 5 years, I think that will likely remain a thing. This is especially true for companies or industries that went too far into AI and now have to backpedal. Perhaps that will be due to legal reasons. Perhaps they will need to do so because their industry does not allow for any margin of error. After all, if code needs to be 100% correct, AIs are not a viable choice right now. They do much better when they only need to be 80-90% correct. Swapping to an analogy, if AI advances enough, they may very well render mathematicians obsolete in the same way that computers rendered human calculators obsolete. At that point, those willing and able to stay up-to-date will simply shift into a new position. There used to be similar concerns that compilers and high-level programming languages would render programmers jobless because they wouldn't be needed or because anyone could do it for themselves. Those who fell behind the times were rendered jobless, and we probably lost some amazing talents because of it. However, the field didn't disappear; it just changed. Remember, a lot of older CS professors graduated with math degrees because CS degrees did not exist at the time. As a final note, I want to bring up a point that someone else mentioned in response to my answers. There are plenty of software engineers with dubious skills or that coast by on outdated knowledge. These people will struggle more than any others. A lot of good people will certainly get caught up in the layoffs, but such people will experience the worst of it. ### Can AI discover things top security researchers can't? At a conference called \[un\]prompted, Nicholas Carlini, a research scientist at Anthropic, said "Current LLMs are better vulnerability researchers than I am". This has triggered some discussion (at least in my circles). The obvious caveat here is that Carlini is not a security researcher. This is a really interesting claim, and I tend to believe it. He's probably a very competent researcher and programmer, but his specialization is not security research. As such, his claim cannot automatically be applied to security research as a whole. For the next part, I'm going to swap over to physics and mathematics briefly because there's been a lot of discussion and therefore research in the use of AI in those fields. This may provide insight into less-researched areas like security research. In addition, creating novel papers in physics is the ideal usecase for an AI because the data sets and output are among the most rigorous, mechanic and literate forms of the English language where nothing is intentionally hidden, simply unknown. As such, I think that it provides insights into less-ideal circumstances like cybersecurity. In physics, (to paraphrase Bojan Tunguz whose a data scientist who studies this kind of thing) the bottleneck on scientific discovery hasn't been intelligence or paper-writing. Rather, we have too much paper-writing and not enough confirmation. There are decades-old papers still awaiting proper peer review, validation, and replication. This was mentioned back in [2008 by Chris Anderson](https://www.wired.com/2008/06/pb-theory/) (the guy that founded the TED talks). Dutch psychologist Piet Vroon stated decades ago that the psychology literature had become so vast that people were rediscovering things that had been written about already in the 19th century. AI will just make that bottleneck worse. Even if it helps discover lost works (as I'll discuss later), the AIs can't help validate the papers. Peer reviewers are trying that, and we're already seeing the results. Interestingly, AIs can sort of help with the flood of data. If you heard about the time GPT-5 solved 10 Erdos problems, it actually rediscovered 10 solutions in the literature that were unknown (going back to the flood of papers issue). That has happened a lot since the models are really good at finding forgotten details. To quote physicist Steve Hsu, the models capable of understanding theoretical physics require "a lot of domain expertise, careful prompt, and time-consuming analysis of the AI output". A spreading belief is that theoretical physicists will no longer be people that come up with ideas so much as people trying to figure out what the AI is saying (sounds a bit like haruspex and could make a fun book if nothing else). There's the argument that if we let machines think for us, we will no longer understand anything. While true, it's also this weird scenario where that does limit our capabilities and is unrealistic from the perspective of human nature. I don't have any clear opinions on that so I'll move along. The biggest change AI will likely make the realm of physics is a decline of PhD and postdoc positions since those positions are (to an extent) cheap labor in exchange for getting your name on a paper. ChatGPT is a lot cheaper than a postdoc so it'll decrease the need for them while also making labs less dependent on grant money, ultimately resulting in a flood of technically correct but generally useless papers that no one can review. This is especially bad since the way that grants are currently structured heavily favors productivity over insights. To circle back to cybersecurity, curl is an interesting example. The curl project used to have a bug bounty program but stopped it due to AI. Not because AI failed to contribute anything valuable, but rather the project was overwhelmed with a flood of AI submissions, some of which were valuable and some of which were garbage. ### Will AI ruin CTFs? Right now, I'm seeing a lot of focus on AIs as they relate to CTFs. To begin, be careful using CTFs as the benchmark for how well AIs actually do with cybersecurity. Speaking from personal experience, an AI's performance on a CTF (especially the jeopardy kind) is only sort of reflective of its performance on real-world projects. A jeopardy-style CTF has an intended path and an objectively-correct solution which is completely unlike real security research. A pen tester can spend months looking for a bug that may not exist. A security researcher could pursue a vulnerability for months, find it, and then not realize what they have found. Ultimately, CTFs are just fun little exercises to help you learn in a different way than school projects. As a bare minimum, I think that jeopardy CTFs will abandon the idea of awarding fewer points as more people solve a challenge. They may also lengthen the run time of CTFs. In my experience, AI teams tend to solve a bunch of challenges quickly, rising towards the top of the leaderboard in the early hours. However, given enough time, human players catch up and surpass them. Jeopardy-style CTFs will likely become less common. They've become extremely popular over the past while, but they were never the most realistic type. While they have a lot of infrastructure-related issues and are not ideal for all elements of cybersecurity, Attack-Defense and King of the Hill will probably become a lot more common. After all, it will be really difficult for an AI to solve them. The biggest area where AI could ruin CTFs has to do with motivation. To combat AI, challenges just keep getting harder under the assumption that you are using AI such that they are hard with AI and brutal without it. Players, especially new ones, are unmotivated to actually compete. Due to all of the aforementioned elements, many of the leaderboard positions will be filled by AI-only or AI-assisted players. The new players feel pressured to use AI or are left without confidence. From the other side, AI has left challenge writers unmotivated. They spend an incredible amount of time designing and tuning their challenges. They put in little things for people to discover and enjoy. However, if it is just AIs, why bother? From an infrastructure side, AIs struggle with scope. For example, I've heard that CTFs are experiencing infrastructure issues because AIs are dirbusting web challenges. As I've said a lot throughout this article, CTFs will not disappear; they will simply change. In general, CTFs began with an AD CTF hosted by DEF CON in 1996. To say that CTFs have changed since then would be an understatement, and they will continue to change going forward. Whether we like those changes is a separate matter. ### Conclusion Anyways, apologies for the word soup. Hopefully my ultimate point got across: things are going to change but that doesn't make learning cybersecurity worthless. Some things are going to improve; others will get worse. There is still value in studying programming, cybersecurity, or any other STEM field. If nothing else, AI may be able to do the work of junior personnel, but you can't reach the position of a senior without first studying the field. `