It is a great pleasure to announce the availability of the final release of version 6.0 of the Substance look-and-feel (code-named Sonoma). The release notes for version 6.0 contain detailed information on the contents of this release, which include the following:
- Multi-state animated transitions
- New look for text based components (text fields, combo boxes, spinners, date pickers)
- Custom component states
- Support for drop location
Animations in Substance 6.0 are powered by the Trident animation library, so you will need to add the matching Trident jar to your classpath. Substance 6.0 uses version 1.2 of Trident, which can be downloaded from the main Trident download area or from the Substance 6.0 download area.
In addition to deprecated APIs that have been removed in version 6.0 (see the release notes for version 5.3), application code that uses the following Substance APIs will need to be revisited:
- All painter APIs now operate on a single color scheme. Application code that passed two different color schemes will now need to call the matching APIs twice and use the relevant composites on the graphics context (see the sketch right after this list).
- Configuring the animation settings is now done with the org.pushingpixels.lafwidget.animation.AnimationConfigurationManager APIs. In addition, applications that want to control the resolution of the animation pulses should consult the Trident documentation on this topic.
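To illustrate the painter point above, here is a minimal sketch of the two-pass pattern. The painter.paint(...) calls are hypothetical placeholders for the actual Substance painter signatures (consult the 6.0 Javadocs for those); the composite handling is plain Java2D:

// Hypothetical migration sketch: a 5.x call that blended two color schemes
// becomes two 6.0 calls, with the blend expressed through an AlphaComposite.
// The commented painter.paint(...) lines stand in for the real painter APIs.
void paintWithTwoSchemes(Graphics2D g2d, float blendFactor) {
    Composite oldComposite = g2d.getComposite();
    // first pass - paint with the first color scheme at full opacity
    // painter.paint(g2d, comp, width, height, schemeOne);
    // second pass - paint the second scheme on top, weighted by blendFactor
    g2d.setComposite(AlphaComposite.getInstance(
            AlphaComposite.SRC_OVER, blendFactor));
    // painter.paint(g2d, comp, width, height, schemeTwo);
    g2d.setComposite(oldComposite);
}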
Click on the button below to launch a signed WebStart application that shows the available Substance features.

The following sub-projects are also available:
You are more than welcome to take Substance 6.0 for a ride. Sample screenshots of Substance 6.0 in action:




Programming user interfaces has many challenges. Fetching data from a remote service, populating the UI controls, tracking user input against a predefined set of validation rules, persisting the data back to the server, handling transitions between different application screens, supporting offline mode, handling error cases in the communication layer – all of these are just part of our daily programming tasks. While all of them relate directly to what the end user sees on the screen, there is a small subset that greatly affects the overall user experience.
The two main items in this subset are pixel perfection and application responsiveness. It is important to remember that the end user does not care about all of the wonderful underlying layers that take up the larger part of our day. As bad as analogies usually are, I would compare it to the vast majority of drivers. I personally do not care how many people have worked on a specific model, how intricate the internals of the engine are, or how many technical challenges had to be overcome during development and testing. All I care about is that when I turn the key, the engine starts; that when I turn the wheel, press the brake or turn on the AC, it responds correctly and immediately; and that it feels good to be sitting behind the wheel or in the passenger seat.
It is thus rather unfortunate that a lot of user interfaces are not implemented with these two main goals in mind. It would seem that programmers rather enjoy letting the implementation complexity leak into the UI layer – if it was hard to develop, it should at least look like that on the screen. There are multiple factors contributing to this. The farther you are from the target audience, the less you are able to judge the usability of the application, especially when you are immersed in its internals for long periods of time. While you operate at the level of data objects (perhaps with a direct mapping to the optimized backend data storage), the users don't. They see your application as a means to accomplish the specific flow that they have in mind. If you fail to crystallize those flows during the design stage, your users will see your application as unintuitive, time wasting and counterproductive.
And even if you get the flows right – at least for your target audience – there is the issue of responsiveness. Imagine the following scenario. You're in the kitchen and want to heat up that last slice of pizza from yesterday's party. You put it in the microwave, press the button – and the entire kitchen freezes for one whole minute. You can look at the fridge, but you cannot open it. You remember that you pressed the microwave button, but the tray is not spinning and it does not make any noise. You step into your living room and are not able to get back into the kitchen.
This is what happens when your code does I/O, database access or any network operation on the UI thread. It does not really matter how small the operation is or how fast your machine is. If it locks the UI for even a fraction of a second, people will notice. You can say – well, this operation here that I'm doing is persisting the current screen state to the backend, so I cannot really let the user interact with the UI while I'm doing that. Or maybe it's just not that important and the user can wait. Or maybe it's just going to add to the development complexity of the UI layer.
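For reference, Swing's own answer to this is to push the long operation to a worker thread and touch the components only once you are back on the EDT. Here is a minimal SwingWorker sketch – the service call and the status label are made-up placeholders:

// Minimal sketch: run the slow call off the EDT, update the UI only in done().
// fetchReportFromServer() and statusLabel are hypothetical placeholders.
new SwingWorker<String, Void>() {
    @Override
    protected String doInBackground() throws Exception {
        // runs on a background thread - a safe place for I/O and network calls
        return fetchReportFromServer();
    }

    @Override
    protected void done() {
        // runs back on the EDT - a safe place to update components
        try {
            statusLabel.setText(get());
        } catch (Exception exc) {
            statusLabel.setText("Failed to load the report");
        }
    }
}.execute();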
Doing this right is hard. First, you need to understand what is right – and that varies depending on the specific scenario. Do you prevent the user from doing anything with the application? Or maybe let him browse some subparts of the screen while you’re running this long operation? Or maybe even let him change information elsewhere in the application? Do you show the progress of that operation? Do you allow canceling the operation? Do you allow creating dependencies between executing / scheduled operations?
And after you know what you want to do, the next step is actually implementing and testing it. At this point the word “multi-threading” will be your friend and nemesis. I cannot say that we are at a stage where doing multi-threading in UI code is easy (although I certainly am not an expert in all the modern UI toolkits and libraries). It is certainly easier than it was a few years ago, but it's still a mess. The words of Graham Hamilton from five years ago are still true today:
I believe you can program successfully with multi-threaded GUI toolkits if the toolkit is very carefully designed; if the toolkit exposes its locking methodology in gory detail; if you are very smart, very careful, and have a global understanding of the whole structure of the toolkit. If you get one of these things slightly wrong, things will mostly work, but you will get occasional hangs (due to deadlocks) or glitches (due to races). This multithreaded approach works best for people who have been intimately involved in the design of the toolkit.
Unfortunately I don’t think this set of characteristics scale to widespread commercial use. What you tend to end up with is normal smart programmers building apps that don’t quite work reliably for reasons that are not at all obvious. So the authors get very disgruntled and frustrated and use bad words on the poor innocent toolkit. (Like me when I first started using AWT. Sorry!)
I believe that any help we can get in writing correct multi-threaded code that deals with UI is welcome. This is why I continue enforcing the Swing threading rules in Substance. It is by far the biggest source of complaints ever since it was introduced about a year and a half ago in version 5.0. The original blog entry on the subject implied – rather unfortunately – that I wanted to make my job easier and not handle bugs that originate from UI threading violations. Allow me to clarify my position on the subject and repost my reply from one of the Substance forum postings:
I do not intend to provide such API (disabling the threading checks) in the core Substance library. Once such an API exists, people will have little to no incentive to make their code more robust and compliant with the toolkit threading rules.
If the code you control violates the threading rules – and you *know* it – you should fix it. It does not matter whether you're using Substance or not.
If the code you do not control violates the threading rules – either put pressure on the respective developers to change it or stop using it.
It may be painful in the short term. I may lose potential users because of this. It may cause internal forks of the code base. I am aware of these issues. In my personal view, all of them are dwarfed by what is right in the long term interests of both Substance itself and Swing applications in general.
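For those wondering what this enforcement boils down to in practice, it is essentially an assertion of the following shape at sensitive entry points – a simplified sketch, not the actual Substance code:

// Simplified sketch of a threading-rule check; the real checks in Substance
// live inside the look-and-feel delegates and cover many more entry points.
static void checkThreadingViolation(String operation) {
    if (!SwingUtilities.isEventDispatchThread()) {
        throw new IllegalStateException(operation
                + " must be invoked on the Event Dispatch Thread, but was called on "
                + Thread.currentThread().getName());
    }
}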
The follow-up posting by Adam Armistead provides a deeper look into why this matters – and I thank Adam for allowing me to repost it here:
I would just like to say I strongly support Kirill in this and I am very glad to see he is sticking to his guns on this. I feel that it is too easy to violate threading rules in UI code and that I run across entirely too much code that does. I feel that if someone doesn’t strictly enforce these threading rules then there is not enough pressure on developers to fix the problem. If there was enough pressure I wouldn’t see so damn much of this.
As for it causing problems due to 3rd party dependencies having bad UI code, there are tons of solutions for this. I have personally put pressure on developers to fix problems, submitted patches to open source projects to fix code myself, extended classes, forked codebases, used bytecode manipulation, proxy classes, and Spring and AspectJ method injection, method replacement, as well as adding before/after/around advice to methods. I have yet to encounter a situation where a little work or ingenuity on my part has not been able to overcome someone else’s crappy code. In the worst cases I have written my own libraries that don’t suck, but in most cases I didn’t have to go to this extreme.
I sincerely believe that having Substance crash my applications has made MY applications, as well as those I interact with, better. I have seen people in comment sections of blog posts and on forums advise using Substance in order to assist developers in finding UI threading violations. I have fixed open source code and seen others do the same because Substance throws this exception. I can also say that I know many coders who, if they had the choice to just log it, would do so and just say, “aww, I'm busy, I'll fix it later” and likely never get around to it…
They're always busy, so when one thing gets done there are always half a dozen more that pop up. If it's not crashing their code, it's not critical to fix. Besides, it's too easy to violate these rules. I'm a Swing developer and I know the rules, and sometimes when I'm hammering out features I slip up and do things incorrectly. Personally, I am glad Substance catches it so I can fix it now.
What are the chances that Swing will start enforcing threading rules in the core? Zero. Between focusing all the client-side resources on JavaFX and sticking to binary compatibility (even when any reasonably sized Swing application needs code changes when it is ported between different JDKs), there is no chance. However, as SWT and Android show, a toolkit / library that aggressively enforces its own threading rules from the very beginning is not such a bad thing. And who knows, your users may even thank you for it.
I am excited today to announce the availability of the release candidate for version 1.1 of the Trident animation library (code-named Bogeyman). Most of the new functionality in this version was driven by user feedback, and includes the following:
In addition to the bundled simple applications, Trident has two blueprint projects. These projects show how to use Trident to drive complex animation scenarios in Internet-enabled rich applications that show graphic information on music albums. Project Onyx is the Swing implementation (see detailed walkthroughs), and Project Granite is the SWT implementation (see detailed walkthroughs).

If you have Java 7 installed on your machine, click the button below to launch the WebStart version of Project Onyx:

You are more than welcome to take Trident 1.1RC for a ride and report any problems in the project mailing lists, forums or issue tracker. The final release is scheduled for October 12; only bug fixes will go in until that date.
This has been on my to-do list for quite a long time, and over the last couple of months I have finally started adding unit tests to the Flamingo component suite. Before long, I found myself stuck deep in a quagmire of nested runnables, complex interactions with the Event Dispatch Thread (EDT) and code that is hard to read, extend and maintain. And even then the tests behaved unpredictably, with some failing randomly – indicating incorrect interactions with the EDT.
At that moment I realized that the true potential of open source is collaboration and reuse of existing libraries – unless you feel that you can do better, of course :) The final tipping point came in July, when Alex published a blog entry on how the FEST Swing library can facilitate writing clean and maintainable interactions with UI components – and that was it. Since then I have been adding FEST-driven unit tests that exercise different aspects of the Flamingo components, including model manipulation, layout, and interaction with the mouse and keyboard. Here, I'm going to show a small subset of these tests, illustrating the relevant parts of FEST Swing.
Before delving into the project documentation, start with the blog entry on writing EDT-safe UI tests. It's short, it's clean and it's quite illuminating. Then head over to the main project page, read a little bit about the library and download the latest distribution (I'm using the latest 1.2a3 version). Once you're done, add the following jars to your classpath:
- fest-assert-1.1.jar
- fest-reflect-1.1.jar
- fest-swing-1.2a3.jar
- fest-swing-junit-1.2a3.jar
- fest-util-1.1.jar
- junit-4.3.1.jar
Now let's take a look at what a unit test class looks like. It starts by extending the base FEST class:
public class ActionCommandButtonTestCase extends FestSwingJUnitTestCase {
The biggest advantage of using this base class is that it automatically installs a custom repaint manager that tracks EDT violations – keeping both your unit tests and your main code clean of them.
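If some of your tests cannot extend this base class, the same check can be installed by hand – a minimal sketch, assuming FEST's FailOnThreadViolationRepaintManager (which, as far as I can tell, is what the base class uses under the covers):

@BeforeClass
public static void installEdtViolationCheck() {
    // replaces the default repaint manager with one that fails the test on
    // any component access that happens outside the Event Dispatch Thread
    FailOnThreadViolationRepaintManager.install();
}

Next, it's time to create the main window and a command button for testing: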
JFrame buttonFrame;
int count;
JCommandButton button;

@Override
@Before
public void onSetUp() {
    URL resource = ActionCommandButtonTestCase.class.getClassLoader()
            .getResource("utest/common/edit-paste.svg");
    Assertions.assertThat(resource).isNotNull();
    final ResizableIcon icon = SvgBatikResizableIcon.getSvgIcon(resource,
            new Dimension(32, 32));
    Pause.pause(new Condition("Waiting to load the SVG icon") {
        @Override
        public boolean test() {
            return !((AsynchronousLoading) icon).isLoading();
        }
    });
    GuiActionRunner.execute(new GuiTask() {
        @Override
        protected void executeInEDT() throws Throwable {
            buttonFrame = new JFrame();
            buttonFrame.setLayout(new FlowLayout());
            button = new JCommandButton("test", icon);
            button.setDisplayState(CommandButtonDisplayState.BIG);
            buttonFrame.add(button);
            buttonFrame.setSize(300, 200);
            buttonFrame.setLocationRelativeTo(null);
            buttonFrame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
            buttonFrame.setVisible(true);
            count = 0;
            button.addActionListener(new ActionListener() {
                @Override
                public void actionPerformed(ActionEvent e) {
                    count++;
                }
            });
        }
    });
    GuiActionRunner.execute(new GuiTask() {
        @Override
        protected void executeInEDT() throws Throwable {
            Point locOnScreen = buttonFrame.getLocationOnScreen();
            locOnScreen.move(10, 20);
            robot().moveMouse(locOnScreen);
        }
    });
}
While the documentation describes two ways of locating components for testing – fixtures and finders – I found it simpler to just store references to the relevant components, especially for test windows that have a very small number of controls.
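For completeness, here is what the finder approach could look like – a sketch that assumes the command button was given a name in the setup code (something I do not actually do in the Flamingo tests):

// assumes the setup code called button.setName("pasteButton");
// the lookup goes through FEST Swing's generic component finder
JCommandButton found = robot().finder().findByName("pasteButton",
        JCommandButton.class);
Assertions.assertThat(found).isNotNull();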
The onSetUp method has the following main stages:
- Create an SVG-based icon and wait for it to load. Here, Pause.pause is used to wait for the specific Condition – the name of its test() method is quite unfortunate and, IMO, should be renamed to better reflect its purpose.
- Then, GuiActionRunner.execute(GuiTask) is called to create the main window and the command button. This is by far my favorite method in FEST Swing – it runs the specified code on the EDT and waits for it to finish. And while closures could have made the code even more readable, even in its present form it has enormous potential to simplify UI-related tests.
- Finally, the same method is used to move the mouse away from the button – for subsequent tests of mouse interaction.
Once we have the setup method in place, it is time for a very simple unit test:
@Test
public void sanityCheck() {
    String buttonText = GuiActionRunner.execute(new GuiQuery<String>() {
        @Override
        protected String executeInEDT() throws Throwable {
            return button.getText();
        }
    });
    Assertions.assertThat(buttonText).isEqualTo("test");
}
Here, my second favorite feature – GuiActionRunner.execute(GuiQuery) – is used. It provides an EDT-safe way to query the current state of a specific UI component.
At this point it is important to note that UI testing can only be done to a certain degree. While the test above checks that getText() returns the right value, it does not check that the visual representation of the button on the screen actually displays this text (or any text, for that matter). Testing the correctness of the painting routines is not simple. Even the straightforward approach of comparing against an offline collection of “expected” screenshots will not work under look-and-feels that use the font settings of the current desktop – which vary not only across operating systems, but also across different DPI settings. As such, at the present moment my tests focus on checking the model, layouts and mouse / keyboard interactions.
Here are three tests to check that the associated action listener is invoked when the command button is activated with mouse, keyboard or API:
@Test
public void activateButtonWithMouse() {
    robot().click(button);
    robot().waitForIdle();
    Assertions.assertThat(count).isEqualTo(1);
}

@Test
public void activateButtonWithSpace() {
    robot().moveMouse(button);
    robot().pressAndReleaseKeys(KeyEvent.VK_SPACE);
    robot().waitForIdle();
    Assertions.assertThat(count).isEqualTo(1);
}

@Test
public void activateButtonWithAPI() {
    GuiActionRunner.execute(new GuiTask() {
        @Override
        protected void executeInEDT() throws Throwable {
            button.doActionClick();
        }
    });
    robot().waitForIdle();
    Assertions.assertThat(count).isEqualTo(1);
}
Here I'm using different APIs of the Robot class to simulate user interaction with the keyboard and mouse, as well as an EDT-safe invocation of the AbstractCommandButton.doActionClick() API.
Having these building blocks in place allows you to create more complex scenarios, testing various flows through your model classes. Here is a test case that checks that pressing the mouse, moving it away from the button and then releasing it does not activate the registered action listener:
@Test
public void pressButtonAndMoveAwayBeforeRelease() {
    robot().pressMouse(button, AWT.centerOf(button));
    robot().waitForIdle();
    robot().moveMouse(button, new Point(-10, 10));
    robot().waitForIdle();
    robot().releaseMouseButtons();
    robot().waitForIdle();
    // no action listener should have been invoked
    Assertions.assertThat(count).isEqualTo(0);
}
Finally, a more complex scenario tests the API that activates the registered action listener on mouse press – as opposed to mouse release:
@Test
public void fireActionOnPress() {
    GuiActionRunner.execute(new GuiTask() {
        @Override
        protected void executeInEDT() throws Throwable {
            button.getActionModel().setFireActionOnPress(false);
        }
    });
    Assertions.assertThat(GuiActionRunner.execute(new GuiQuery<Boolean>() {
        @Override
        protected Boolean executeInEDT() throws Throwable {
            return button.getActionModel().isFireActionOnPress();
        }
    })).isFalse();
    // press mouse over the button
    robot().pressMouse(button, AWT.centerOf(button));
    robot().waitForIdle();
    // no action listener should have been invoked yet
    Assertions.assertThat(count).isEqualTo(0);
    // release mouse
    robot().releaseMouseButtons();
    robot().waitForIdle();
    // action listener should have been invoked on the release
    Assertions.assertThat(count).isEqualTo(1);
    // mark the button to fire the action listeners on mouse press
    GuiActionRunner.execute(new GuiTask() {
        @Override
        protected void executeInEDT() throws Throwable {
            button.getActionModel().setFireActionOnPress(true);
        }
    });
    Assertions.assertThat(GuiActionRunner.execute(new GuiQuery<Boolean>() {
        @Override
        protected Boolean executeInEDT() throws Throwable {
            return button.getActionModel().isFireActionOnPress();
        }
    })).isTrue();
    // press mouse over the button
    robot().pressMouse(button, AWT.centerOf(button));
    robot().waitForIdle();
    // action listener should have been invoked on the press
    Assertions.assertThat(count).isEqualTo(2);
    // release mouse
    robot().releaseMouseButtons();
    robot().waitForIdle();
    // no additional action listener invocation on the release
    Assertions.assertThat(count).isEqualTo(2);
}
This entry showed a very small subset of what FEST Swing can do. I am still exploring this library as I continue adding more test cases to cover the command buttons – on the way towards unit tests that will cover the ribbon component.