More modern choices are JADX (https://github.com/skylot/jadx) or Vineflower (https://github.com/Vineflower/vineflower). If you want a paid, higher-quality option, try JEB (https://www.pnfsoftware.com/).
Do any of these modern choices include features that use LLMs to further process the decompiled code? Seems like an obvious direction, even just to infer variable names.
>Seems like an obvious direction, even just to infer variable names.
When debug symbols are included (which is more or less the default), the local variable names are already present; an LLM would be the last thing I'd consider.
Yeah, I mean duh, of course? Why infer when you have the proper names? I don't understand what you're trying to point out here...
Sadly it's not maintained anymore, and even the IntelliJ IDEA-derived decompilers are better nowadays (they used to be horrible until a few years ago).
In addition to being limited to class files built for Java 8, it sadly has a hard time decompiling new language features even when they are compiled for a Java 8 target. And then there is the well-known bug where decompiling full jars in bulk does not give you the same output you see in the UI, but something orders of magnitude worse... JD was great while it lasted; it helped me solve a lot of issues with vendors over the years.
The most annoying thing in IntelliJ (it uses Fernflower) is that it does not maintain correct line numbers, so there is a divergence when debugging. You still need to download the sources, but they are not always available.
I've only seen that with transitive dependencies that are instantiated via reflection.
I think this is popping up on Hacker News because the concept of decompilers has become a bit more acceptable recently. (strokes beard) Time was, decompilation was said to be impossible (as my wise friend syke said: most things people say are impossible are just tedious). Then, it became "something you could only do in a targeted, single-application fashion."
Somewhere in there, Alan Kay laughed and handed everyone dynamic code.
These days, with AI in tow, decompilation is becoming the sort of thing that could be in the toolchain, replacing IDA and such. Why debug and examine when you can literally decompile?!
So, maybe, with that idea newly back on the table, someone felt the need to post a counterpoint, proving once again that everything old is new again.
Hats off for decompiling Java apps that mostly predate generics and annotations... both of which were added in Java 5.
>Hats off for decompiling Java apps that mostly predate generics and annotations... both of which were added in Java 5.
The first very famous and good decompiler was written in C. Other than that, generics and annotations didn't make the work any easier at all, decompilation-wise.
I'm not sure you lived the same history I did. Decompiling intermediate languages has always been a thing. Hell, back in college as an intern I was looking at the assembly of a decompiled C# binary, and back in high school I was using IntelliJ's Java decompiler to poke at some game applets to see if there were hacking opportunities. This was back when RuneScape didn't have a paid version.
Is there anything especially hard about decompiling (to) Java?
.NET/C# decompilers are widespread and generally work well (there is one built into Visual Studio nowadays, JetBrains have their own, and there were a bunch of stand-alone tools back in the day).
< disclaimer - I wrote CFR, which is one of the original set of 'modern' java decompilers >
Generic erasure is a giant pain in the rear. C# doesn't do this. You don't actually keep any information about generics in the bytecode itself; some of the metadata is present, BUT IT COULD BE FULL OF LIES.
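To make that concrete, here's a minimal sketch of what erasure means at runtime (standard Java behavior, nothing CFR-specific): the type arguments survive only as an optional Signature attribute, while the bytecode and the runtime class are identical for every parameterization.

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    // The type arguments below survive compilation only as optional
    // "Signature" metadata; the bytecode itself just sees raw List.
    static List<String> strings = new ArrayList<>();
    static List<Integer> integers = new ArrayList<>();

    public static void main(String[] args) {
        // Both parameterizations share one runtime class: erasure at work.
        // A decompiler must trust the (unverified) Signature attribute to
        // reconstruct List<String> vs List<Integer> - and it can lie.
        System.out.println(strings.getClass() == integers.getClass()); // true
    }
}
```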
There's also a huge amount of syntactic sugar in later Java versions - take, for example, switch expressions.
https://www.benf.org/other/cfr/switch_expressions.html
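For readers who haven't met them, a small sketch of the sugar in question (Java 14+). The compiler lowers this to plain tableswitch/lookupswitch bytecode plus jumps, which the decompiler then has to recognize and re-sugar:

```java
public class SwitchDemo {
    static String describe(int code) {
        // A switch expression: it yields a value, and arrows don't fall through.
        return switch (code) {
            case 200, 204 -> "success";
            case 404 -> "not found";
            default -> {
                String s = "unexpected: " + code; // block bodies must yield
                yield s;
            }
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(404)); // prints: not found
    }
}
```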
and OH MY GOD FINALLY
https://www.benf.org/other/cfr/finally.html
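The short version of why finally hurts, sketched with plain javac behavior: the compiler inlines a copy of the finally body on every exit path, normal and exceptional, so the decompiler has to spot the duplicates and fold them back into one block; nest a few try/finally statements and the copies multiply.

```java
public class FinallyDemo {
    static int f(boolean early) {
        try {
            if (early) return 1; // javac inlines the finally body before this return...
            return 2;            // ...again before this one...
        } finally {
            // ...and once more on the exceptional path. Three copies in the
            // bytecode, one block in the source - the decompiler must undo that.
            System.out.println("cleanup");
        }
    }

    public static void main(String[] args) {
        System.out.println(f(true)); // prints: cleanup, then 1
    }
}
```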
>Generic erasure is a giant pain in the rear
Personally, I don't get the sentiment. Yeah, decompiling might not reproduce the original source code, which is fair. It's possible to generate code using invokedynamic and whatnot - and it's still valid bytecode if a compiler opts to emit it.
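One everyday case of a compiler opting for invokedynamic, for the record: since Java 9 (JEP 280), plain string concatenation compiles to an invokedynamic call site rather than StringBuilder chains.

```java
public class ConcatDemo {
    public static void main(String[] args) {
        int x = 42;
        // Since Java 9 (JEP 280), javac compiles this "+" into a single
        // invokedynamic instruction bootstrapped by
        // java.lang.invoke.StringConcatFactory.makeConcatWithConstants,
        // not into explicit StringBuilder calls. A decompiler has to map
        // that call site back to ordinary concatenation syntax.
        System.out.println("x = " + x);
    }
}
```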
When decompiling bytecode there has to be a reason for it, and a good one. There has to be a goal.
If the code is somewhat humanly understandable, that's OK. If it's more readable than raw bytecode, that's already an improvement.
Reading bytecode alone is not hard when it comes to reverse engineering. Java class files expose method and field names by design. Having local variable names and line numbers preserved is also very common, since exception stack traces are such an excellent debugging tool; hence the debug info tends to be kept.
try/finally shares the same issues, albeit less pronounced.
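To make the preserved-debug-info point above concrete, a minimal illustration (plain javac behavior, nothing tool-specific):

```java
public class TraceDemo {
    public static void main(String[] args) {
        int divisor = 0;
        // The ArithmeticException's stack trace points at this exact file
        // and line because javac keeps the SourceFile and LineNumberTable
        // attributes by default. Building with -g (which Maven and Gradle
        // pass by default) also keeps the LocalVariableTable, i.e. the
        // name "divisor" itself. `javap -c -l TraceDemo` prints both
        // tables; decompilers read the same attributes to restore names.
        System.out.println(1 / divisor);
    }
}
```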
C# doesn't erase all generics, but there is still some type erasure happening: nullable reference types, tuple element names, and the object/dynamic distinction are all absent from .NET bytecode; they are only stored in attributes for public signatures, and are erased for local variable types.
C# also has huge amounts of syntactic sugar: `yield return` and `await` compile into huge state machines, and `fixed` statements come with similar problems to `finally` in Java (including the possibility of exponential code growth during decompilation).
You're awesome! I had really good experiences with CFR in the mid 2010s.
I used it for game modding and documentation (and caught/reported a few game bugs and vulnerabilities along the way). I'd pull game files from Steam depots with SteamKit, decompile them with CFR, and run the resulting Java through Doxygen.
One of the use cases of decompilation is bug hunting / vulnerability research. And that's still one of the use cases where AI isn't that good, because you must be precise.
I'm not saying that won't change, but I still see a bright future for reversing tools, with or without AI sidekicks (like the BN plugin).
I used Codex 5.1 yesterday: I pointed it at a firmware blob and let it extract and explore it, targeting a specific undisclosed vulnerability, and it managed (after floundering for a bit) to read the Lua bytecode, identify the vuln, and exploit it on a device running the firmware.
If anything, vulnerability research should be a good target for AI, because failing to find an exploit isn't costly, a found exploit is easily verified, and a 1-in-N success rate is still very useful.
Vineflower is probably what you want nowadays
This one hasn't been updated in 5 years and doesn't support any newer Java features.
Which new features are not supported?
I wish I could use it to recompile itself
Or you can use https://jar.tools/ - an online Java decompiler I built some time ago. Runs in your browser.
> Runs in your browser
You say it like it's a good thing.
Yes, because you don't need to install anything on your machine.
A great tool for digging into obscure jar and class files. I used it many times to track down very obscure bugs in Java-based products. Often you will have a vendor saying that your issue is not real or not reproducible on their end. But with this kind of tool you can peek behind the curtain and figure out how to trigger some condition 100% of the time.
It had better be really old Java code. This decompiler only supports up through Java 8. We're on Java 24 now.
Java 8 is your everyday corporate code ...
Didn't Oracle drop support for Java 8 like six years ago? I'm sure there are plenty of companies still running it, but even Apple (a relatively conservative company in this regard) updated to Java 11 when I was there in ~2019.
> Java SE subscribers will receive JDK 8 updates until at least December 2030
Not for clients with a commercial license, and there are many.
https://github.com/corretto/corretto-8/blob/develop/CHANGELO...
Amazon is still supporting Java 8.
This isn't really the case. A lot of legacy code may still be running the version it was developed against, but Java 17+ has a sizable share of the ecosystem now that all of the popular libraries require it. Spring, for example, bumped its baseline to JDK 17 in 2022.
That doesn't really matter if you're using an old Spring version with the old Java version. Spring offers enterprise support for Spring Framework 5, which still supports Java 8.
But organizations still using Java 8 will most likely be running some kind of Java enterprise application server with vendor support. IBM will support WebSphere with Java 8 until at least 2030, and maybe longer if customers keep paying. I'd guess Oracle has a similar policy.
It used to, but Oracle's licensing and, probably more importantly, security guidelines from the very top linking CVE scores to mandatory updates got things moving in the last few years.
What about Android? Hmm....
Nope, we are on Java 25
Next, add a feature that does a pass with an LLM to make local variable names more realistic and add comments.
What is the use of decompiling? Is there any real-world use case?