It gives me great pleasure to announce the seventh major release of Radiance. Let’s get to what’s been fixed, and what’s been added. First, I’m going to use emojis to mark different parts of it like this:
💔 marks an incompatible API / binary change
🎁 marks new features
🐞 marks bug fixes and general improvements
Dependencies for core libraries
- Gradle from 7.1 to 7.2
- Kotlin from 1.5.10 to 1.5.31
- Kotlin coroutines from 1.5.0 to 1.5.2
Neon
Substance
Flamingo

- 💔 Add reference to the ribbon as a parameter to all OnShowContextualMenuListener methods
- 🐞 Align icon theming across all Flamingo components
- 🐞 Fix layout of command buttons in TILE layout under RTL
- 🐞 Fix visuals of horizontal command button strips under RTL
- 🐞 Fix layout of anchored command buttons under RTL
- 🐞 Fix layout of command button popup content under RTL
- 🐞 Fix issues with updating ribbon gallery content
Photon
- 💔 Remove SvgBatikIcon and SvgBatikNeonIcon
- 💔 Move Photon to be under tools
General
As with the earlier release 4.0.0, this release has mostly been focused on stabilizing and improving the overall API surface of the various Radiance modules. As always, I’d love for you to take this Radiance release for a spin. Click here to get the instructions on how to add Radiance to your builds. And don’t forget that all of the modules require Java 9 to build and run.
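For Gradle-based builds, the dependency lines look roughly like this – a sketch assuming the org.pushing-pixels group on Maven Central; see the linked instructions for the exact artifact names:

// build.gradle.kts - artifact names are illustrative, check the
// instructions linked above for the full list
dependencies {
    implementation("org.pushing-pixels:radiance-substance:4.5.0")
    implementation("org.pushing-pixels:radiance-flamingo:4.5.0")
}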
And now for the next big thing or two.
I will take the next two weeks to fix any bugs or regressions that are reported on the 4.5.0 release. On the week of October 18th, all Radiance modules are going to undergo a major refactoring. While Radiance unified all of the Swing projects that I’ve been working on since around 2004, this unification was rather superficial. It made it easier to have inter-module dependencies. It made it easier to write documentation. It made it easier to schedule coordinated releases. But it didn’t make it easier to see what Radiance is.
In the last year or so I kept asking myself the same questions over and over again:
If I were starting these libraries today, would they still be using these disjointed codenames (Neon, Trident, Substance, Flamingo, not to mention Torch, Apollo, Zodiac, Meteor, Ember, Plasma, Spyglass, Beacon, etc.)? For somebody who wants to deep dive into the implementation details, are there places that are internally inconsistent? For an app developer who wants to get the most out of these libraries, does Radiance provide an externally approachable and consistent set of APIs?
The first step I’m taking to answer at least some of these questions is moving away from the codenames, and renaming everything based on the functional boundaries. And by everything I mean everything – modules, classes, methods, fields, variables, etc. It’s going to be a huge breaking change. But it’s something that I feel is way overdue for a project of this complexity. More specifically:
org.pushingpixels.trident -> org.pushingpixels.radiance.animation
org.pushingpixels.neon -> org.pushingpixels.radiance.common
org.pushingpixels.substance -> org.pushingpixels.radiance.theming
org.pushingpixels.flamingo -> org.pushingpixels.radiance.component
org.pushingpixels.substance.extras -> org.pushingpixels.radiance.theming.extras
org.pushingpixels.ember -> org.pushingpixels.radiance.theming.ktx
org.pushingpixels.meteor -> org.pushingpixels.radiance.swing.ktx
org.pushingpixels.plasma -> org.pushingpixels.radiance.component.ktx
org.pushingpixels.torch -> org.pushingpixels.radiance.animation.ktx
org.pushingpixels.tools.apollo -> org.pushingpixels.radiance.tools.schemeeditor
org.pushingpixels.tools.beacon -> org.pushingpixels.radiance.tools.themingdebugger
org.pushingpixels.tools.hyperion -> org.pushingpixels.radiance.tools.shapereditor
org.pushingpixels.tools.ignite -> org.pushingpixels.radiance.tools.svgtranscoder.gradle
org.pushingpixels.tools.lightbeam -> org.pushingpixels.radiance.tools.lafbenchmark
org.pushingpixels.tools.photon -> org.pushingpixels.radiance.tools.svgtranscoder
org.pushingpixels.tools.zodiac -> org.pushingpixels.radiance.tools.screenshot
Classes that used codenames, such as SubstanceLookAndFeel, TridentConfig, etc., will be renamed to follow the functionality of the matching API sub-surface. For example:
SubstanceCortex -> RadianceLafCortex
TridentCortex -> RadianceAnimationCortex
SubstanceButtonUI -> RadianceButtonUI
This first round of refactoring will be the next Radiance release. It will not move classes between modules. It will not add or remove modules, classes or methods. Migrating from 4.5 to 5.0 will require a lot of import renaming, and some amount of code changes wherever you are calling Radiance APIs. Once 5.0 is out, the next release will have follow-up refactorings to clean up places that have not aged well.
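To make this concrete, here is a hypothetical before-and-after of a single import, following the mapping above (the exact 5.0 package layout may end up slightly different):

// Hypothetical illustration of the import renaming, based on the
// mapping above; the exact 5.0 package layout may differ.

// Before (4.5):
import org.pushingpixels.substance.api.SubstanceCortex

// After (5.0):
import org.pushingpixels.radiance.theming.api.RadianceLafCortex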
What’s the other big thing that I alluded to earlier? I want to provide support for consistent application of custom visuals across all supported Swing components. In Substance, this is done with painters. Due to the complicated nature of some of these painters, Substance has been using cached offscreen bitmaps pretty much since the very beginning to maintain a good performance footprint. The very first time a component needs to be rendered in a certain visual state, Substance renders those visuals to an offscreen bitmap. The next time around, if there is already a cached bitmap that matches the current state, it is reused by being rendered straight to the screen.
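As a rough sketch of that caching model (not the actual Substance implementation – the key structure below is simplified for illustration):

import java.awt.image.BufferedImage

// Simplified illustration of state-keyed offscreen bitmap caching.
// The real Substance cache keys track many more visual attributes.
data class VisualStateKey(
    val width: Int,
    val height: Int,
    val state: String // e.g. "rollover", "pressed", "selected"
)

private val backgroundCache = HashMap<VisualStateKey, BufferedImage>()

fun getBackground(key: VisualStateKey): BufferedImage =
    backgroundCache.getOrPut(key) {
        // First request for this exact visual state - render the painters
        // into a fresh offscreen bitmap and cache it
        BufferedImage(key.width, key.height, BufferedImage.TYPE_INT_ARGB).also { bitmap ->
            val g = bitmap.createGraphics()
            // ... run fill / border / etc. painters against 'g' ...
            g.dispose()
        }
    }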
While this model has served Substance (and, by extension, the Flamingo components) rather well, it has started to show significant cracks over the last few years. You can find more information on the underlying issues in this bug tracker, but the gist of it is rather simple – screens with fractional DPI settings (125% or 150%, for example) do not play well with rendering offscreen bitmaps. The end result is that a hairline (one-pixel wide) element can come out fuzzy, distorted, or not show up on the screen at all.
It is going to be a long road, and it might take longer than usual to get the next Radiance release out the door. My current goal is that by the end of it, Radiance does not use any offscreen bitmaps for any of its rendering, and everything is rendered directly onto the passed graphics object. Lightbeam will certainly come in handy all through that process. Wait, excuse me – Lightbeam will be no more in a couple of weeks. It’s going to be Radiance LAF Benchmark instead.
In the past year or so I’ve been working on a new project. Aurora is a set of libraries for building Compose Desktop apps, taking most of its building blocks from Radiance. I don’t have a firm date yet for when the first release of Aurora will be available, but in the meantime I want to talk about something I’ve been playing with over the last few weeks.
Skia is a library that serves as the graphics engine for Chrome, Android, Flutter, Firefox and many other popular platforms. It has also been chosen by JetBrains as the graphics engine for Compose Desktop. One of the more interesting parts of Skia is SkSL – Skia’s shading language – which allows writing fast and powerful fragment shaders. While shaders are usually associated with rendering complex scenes in video games and CGI effects, in this post I’m going to show how I’m using Skia shaders to render textured backgrounds for desktop apps.
First, let’s start with a few screenshots:

Here we see the top part of a sample demo frame under five different Aurora skins (from top to bottom: Autumn, Business, Business Blue Steel, Nebula, Nebula Amethyst). Autumn features a flat color fill, while the other four have a horizontal gradient (darker at the edges, lighter in the middle) overlaid with a curved arc along the top edge. If you look closer, all five also feature something else – a muted texture that spans the whole colored area.
Let’s take a look at another screenshot:

Top row shows a Perlin noise texture, one in greyscale and one in orange. Bottom row shows a brushed metal texture, one in greyscale and one in orange.
Let’s take a look at how to create these textures with Skia shaders in Compose Desktop.
First, we start with Shader.makeFractalNoise that wraps SkPerlinNoiseShader::MakeFractalNoise:
// Fractal noise shader; baseFrequency is assumed to be defined
// earlier (e.g. 0.45f)
val noiseShader = Shader.makeFractalNoise(
    baseFrequencyX = baseFrequency,
    baseFrequencyY = baseFrequency,
    numOctaves = 1,
    seed = 0.0f,
    tiles = emptyArray()
)
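As a quick sanity check (and to get something like the greyscale squares in the second screenshot), the raw noise can be drawn directly by setting it on a Skia Paint – a minimal sketch, assuming you have a Skia Canvas at hand:

// Visualize the raw fractal noise before any duotone mapping is applied
val noisePaint = Paint().apply { shader = noiseShader }
canvas.drawRect(Rect.makeWH(400f, 400f), noisePaint)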
Next, we have a custom duotone SkSL shader that computes luma (brightness) of each pixel, and uses that luma to map the original color to a point between two given colors (light and dark):
// Duotone shader
val duotoneDesc = """
    uniform shader shaderInput;
    uniform vec4 colorLight;
    uniform vec4 colorDark;
    uniform float alpha;

    half4 main(vec2 fragcoord) {
        vec4 inputColor = shaderInput.eval(fragcoord);
        float luma = dot(inputColor.rgb, vec3(0.299, 0.587, 0.114));
        vec4 duotone = mix(colorLight, colorDark, luma);
        return vec4(duotone.r * alpha, duotone.g * alpha, duotone.b * alpha, alpha);
    }
"""
This shader gets four inputs. The first is another shader (which will be the fractal noise we created earlier). The next two are the two colors, and the last one is the alpha (for applying partial translucency).
Now we create a byte buffer to pass our colors and alpha to this shader:
val duotoneDataBuffer = ByteBuffer.allocate(36).order(ByteOrder.LITTLE_ENDIAN)
// RGBA colorLight
duotoneDataBuffer.putFloat(0, colorLight.red)
duotoneDataBuffer.putFloat(4, colorLight.green)
duotoneDataBuffer.putFloat(8, colorLight.blue)
duotoneDataBuffer.putFloat(12, colorLight.alpha)
// RGBA colorDark
duotoneDataBuffer.putFloat(16, colorDark.red)
duotoneDataBuffer.putFloat(20, colorDark.green)
duotoneDataBuffer.putFloat(24, colorDark.blue)
duotoneDataBuffer.putFloat(28, colorDark.alpha)
// Alpha
duotoneDataBuffer.putFloat(32, alpha)
And create our duotone shader with RuntimeEffect.makeForShader (a wrapper for SkRuntimeEffect::MakeForShader) and RuntimeEffect.makeShader (a wrapper for SkRuntimeEffect::makeShader):
val duotoneEffect = RuntimeEffect.makeForShader(duotoneDesc)
val duotoneShader = duotoneEffect.makeShader(
    uniforms = Data.makeFromBytes(duotoneDataBuffer.array()),
    children = arrayOf(noiseShader),
    localMatrix = null,
    isOpaque = false
)
With this shader, we have two options to fill the background of a Compose element. The first one is to wrap Skia’s shader in Compose’s ShaderBrush and use the drawBehind modifier:
val brush = ShaderBrush(duotoneShader)

Box(modifier = Modifier.fillMaxSize().drawBehind {
    drawRect(
        brush = brush, topLeft = Offset(100f, 65f), size = Size(400f, 400f)
    )
})
The second option is to create a local Painter object, use the DrawScope.drawIntoCanvas block in the overridden DrawScope.onDraw, get the native canvas with Canvas.nativeCanvas, and call drawPaint on the native (Skia) canvas directly with the Skia shader we created:
val shaderPaint = Paint()
shaderPaint.setShader(duotoneShader)

Box(modifier = Modifier.fillMaxSize().paint(painter = object : Painter() {
    override val intrinsicSize: Size
        get() = Size.Unspecified

    override fun DrawScope.onDraw() {
        this.drawIntoCanvas {
            val nativeCanvas = it.nativeCanvas
            nativeCanvas.translate(100f, 65f)
            nativeCanvas.clipRect(Rect.makeWH(400f, 400f))
            nativeCanvas.drawPaint(shaderPaint)
        }
    }
}))
What about the brushed metal texture? In Aurora it is generated by applying modulated sine / cosine waves on top of the Perlin noise shader. The relevant snippet is:
// Brushed metal shader
val brushedMetalDesc = """
    uniform shader shaderInput;

    half4 main(vec2 fragcoord) {
        // Compute the luma at the first pixel in this row
        vec4 inputColor = shaderInput.eval(vec2(0, fragcoord.y));
        float luma = dot(inputColor.rgb, vec3(0.299, 0.587, 0.114));
        // Apply modulation to stretch and shift the texture for the brushed metal look
        float modulated = abs(cos((0.004 + 0.02 * luma) * (fragcoord.x + 200) + 0.26 * luma)
            * sin((0.06 - 0.25 * luma) * (fragcoord.x + 85) + 0.75 * luma));
        // Map the 0.0-1.0 range to an inverse 0.15-0.3
        float modulated2 = 0.3 - modulated / 6.5;
        half4 result = half4(modulated2, modulated2, modulated2, 1.0);
        return result;
    }
"""
val brushedMetalEffect = RuntimeEffect.makeForShader(brushedMetalDesc)
val brushedMetalShader = brushedMetalEffect.makeShader(
    uniforms = null,
    children = arrayOf(noiseShader),
    localMatrix = null,
    isOpaque = false
)
And then we pass the brushed metal shader as the input to the duotone shader:
val duotoneEffect = RuntimeEffect.makeForShader(duotoneDesc)
val duotoneShader = duotoneEffect.makeShader(
    uniforms = Data.makeFromBytes(duotoneDataBuffer.array()),
    children = arrayOf(brushedMetalShader),
    localMatrix = null,
    isOpaque = false
)
The full pipeline for generating these two Aurora textured shaders is here, and the rendering of textures is done here.
What if we want our shaders to be dynamic? First let’s see a couple of videos:
The full code for these two demos can be found here and here.
The core setup is the same – use RuntimeEffect.makeForShader to compile the SkSL shader snippet, pass parameters with RuntimeEffect.makeShader, and then use either ShaderBrush + drawBehind, or Painter + DrawScope.drawIntoCanvas + Canvas.nativeCanvas + Canvas.drawPaint. The additional setup involved is around dynamically changing one or more shader attributes based on time (and maybe other parameters) and using the built-in Compose reactive flow to update the pixels in real time.
First, we set up our variables:
val runtimeEffect = RuntimeEffect.makeForShader(sksl)
val shaderPaint = remember { Paint() }
val byteBuffer = remember { ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN) }
var timeUniform by remember { mutableStateOf(0.0f) }
var previousNanos by remember { mutableStateOf(0L) }
Then we update our shader with the time-based parameter:
val timeBits = byteBuffer.clear().putFloat(timeUniform).array()
val shader = runtimeEffect.makeShader(
    uniforms = Data.makeFromBytes(timeBits),
    children = null,
    localMatrix = null,
    isOpaque = false
)
shaderPaint.setShader(shader)
Then we have our draw logic:
val brush = ShaderBrush(shader)

Box(modifier = Modifier.fillMaxSize().drawBehind {
    drawRect(
        brush = brush, topLeft = Offset(100f, 65f), size = Size(400f, 400f)
    )
})
And finally, a Compose effect that syncs our updates with the clock and updates the time-based parameter:
LaunchedEffect(null) {
    while (true) {
        withFrameNanos { frameTimeNanos ->
            val nanosPassed = frameTimeNanos - previousNanos
            val delta = nanosPassed / 100000000f
            // Skip the very first frame, when previousNanos is still 0
            if (previousNanos > 0) {
                timeUniform -= delta
            }
            previousNanos = frameTimeNanos
        }
    }
}
Now, on every clock frame we update the timeUniform variable, and then pass that newly updated value into the shader. Compose detects that a variable used in our top-level composable has changed, recomposes it and redraws the content – essentially asking our shader to redraw the relevant area based on the new value.
Stay tuned for more news on Aurora as it is getting closer to its first official release!
Notes:
- Multiple texture reads are expensive, and you might want to force such paths to draw the texture to an SkSurface and read its pixels from an SkImage.
- If your shader does not need to create an exact, pixel-perfect replica of the target visuals, consider sacrificing some of the finer visual details for performance. For example, a large horizontal blur that reads 20 pixels on each “side” as part of the convolution (41 reads for every pixel) can be replaced by a double or triple invocation of a smaller convolution matrix, or by downscaling the original image, applying a smaller blur and upscaling the result (see the sketch after these notes).
- Performance matters because your shader (or shader chain) runs for every pixel. Your target might be a high-resolution display (lots of pixels to process), a low-end GPU, a CPU-bound pipeline (no GPU), or any combination thereof.
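As an illustration of the downscale-and-blur suggestion above, here is a rough sketch with Skia’s Kotlin bindings (the function name and sigma values are made up for illustration, and the exact drawImageRect overloads may differ across Skiko versions):

import org.jetbrains.skia.*

// Rough sketch of the downscale -> blur -> upscale trick from the notes
// above. 'source' is assumed to hold the original pixels; sigma values
// are illustrative. Blurring at half resolution with half the sigma
// approximates a full-resolution blur at roughly a quarter of the cost.
fun drawApproximateBlur(canvas: Canvas, source: Image, width: Int, height: Int) {
    // Downscale the source into a half-size offscreen surface
    val half = Surface.makeRasterN32Premul(width / 2, height / 2)
    half.canvas.drawImageRect(
        source,
        Rect.makeWH(width.toFloat(), height.toFloat()),
        Rect.makeWH(width / 2f, height / 2f)
    )

    // Blur the half-size snapshot with half the original sigma (10 -> 5)
    val blurPaint = Paint().apply {
        imageFilter = ImageFilter.makeBlur(5f, 5f, FilterTileMode.CLAMP)
    }

    // Upscale the blurred snapshot back to the full destination size
    canvas.drawImageRect(
        half.makeImageSnapshot(),
        Rect.makeWH(width / 2f, height / 2f),
        Rect.makeWH(width.toFloat(), height.toFloat()),
        blurPaint,
        true
    )
}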