I know, it’s a cliché, but I think everything has indeed changed. Well, maybe that’s exaggerated, but in the field I’m active in, large language models (some call them artificial intelligence) are currently having a huge impact.
In my experience, they are drastically reshaping how time is spent across the different phases of software engineering. Programming tasks which used to take hours or days (or even more) are now done in minutes. This frees up time for other activities, or simply allows getting more done. But in some cases, it also requires more time to be spent on other phases, otherwise there will be trouble: properly defined requirements and validation of the implementation in particular are becoming extremely important.
One specific example is the reverse engineering of NewtonOS, something various people have been working on for more than twenty years. It is a slow process that requires a lot of time and dedication, and it is sometimes quite frustrating. But with LLMs, this changes - a lot. A year ago, I was still thinking we would need LLMs trained specifically on the available NewtonOS artifacts (DDK headers, ROM disassembly, ARM reference manuals), but things have changed over the last year.
I have been trying a more general approach to fill in the blanks, and it’s quite a ride. The output is still mediocre, but the more structure and data become available, the better it gets. The approach actually works in any case where the solution outline is mostly known:
Do not tell the LLM to solve the problem. Instead, think about how you would solve the problem yourself, and instruct the LLM to follow that path.
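To illustrate the difference between the two instruction styles, here is a minimal sketch, assuming a chat-style LLM interface; the concrete step list is only an example and would need to be tailored to the task at hand, not a recipe I claim works verbatim:

```python
# A naive prompt asks the model to solve the problem outright.
NAIVE_PROMPT = "Reverse engineer this ARM function and tell me what it does."

# A guided prompt encodes your own solution outline as explicit steps.
# The steps below are illustrative assumptions, not a proven recipe.
GUIDED_PROMPT = """You are assisting with reverse engineering ARM code.
Work through these steps in order:
1. List every function called from the disassembly below.
2. Identify loads through a pointer table (candidate vtable dispatch).
3. Propose a C++ method signature consistent with the register usage.
4. Only then summarize what the function does.

Disassembly:
{disassembly}
"""
```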
For the reverse engineering task, this means first putting together tools that help, then using the tools, evaluating the process, improving the tools, and repeating. Concretely, this means automating the extraction of methods, the derivation of vtables and class layouts, and grunt work like resolving cross references. What is really interesting is letting the LLM reflect on the process and improve the tools along the way.
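To give a flavor of such a tool, here is a minimal Python sketch of one common vtable-hunting heuristic: scanning a raw ROM dump for runs of 32-bit words that all point back into the image. The ROM path, base address, endianness, and run-length threshold are all assumptions that would have to be adjusted for a real NewtonOS ROM dump; this is a sketch, not my actual tooling:

```python
import struct

# Hypothetical values: the real base address and file name depend on your dump.
ROM_BASE = 0x00000000      # assumed load address of the ROM image
ROM_PATH = "newton.rom"    # assumed file name of the ROM dump
MIN_RUN = 4                # minimum run of code pointers to call it a candidate

def find_vtable_candidates(path=ROM_PATH, base=ROM_BASE, min_run=MIN_RUN):
    """Scan a raw ROM image for runs of 32-bit words that point back
    into the image -- a common heuristic for locating C++ vtables.
    Assumes a little-endian, 32-bit address space."""
    data = open(path, "rb").read()
    end = base + len(data)
    count = len(data) // 4
    words = struct.unpack("<%dI" % count, data[: count * 4])

    candidates = []
    run_start = None
    for i, w in enumerate(words):
        if base <= w < end and w % 4 == 0:   # looks like an aligned code pointer
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= min_run:
                candidates.append((base + run_start * 4, i - run_start))
            run_start = None
    # flush a run that extends to the end of the image
    if run_start is not None and count - run_start >= min_run:
        candidates.append((base + run_start * 4, count - run_start))
    return candidates  # list of (address, slot_count) pairs

if __name__ == "__main__":
    for addr, slots in find_vtable_candidates():
        print("possible vtable at 0x%08X with %d slots" % (addr, slots))
```

A heuristic like this produces plenty of false positives, which is exactly where the feedback loop comes in: run it, let the LLM look at the hits and misses, and tighten the filter on the next iteration.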