How to open an existing Rails project in IntelliJ IDEA

As described here, the proper way to open an existing Rails site in IDEA is:

  1. File -> New Project
  2. Create project from scratch (Next)
  3. For “Project files location” choose your Rails application directory, or a parent directory
  4. Leave “Create module” checked, select “Ruby Module” as the type, and make sure the “content root” and “module file location” are set to the root of your Rails application (Next)
  5. Ensure your Ruby SDK is configured (Next)
  6. Ensure “Ruby on Rails” is checked, and fill out the appropriate settings under “Use existing Rails application” (Finish)

I have used this approach successfully in IntelliJ IDEA 11.0.2 and 11.1.3.

If you don’t see Ruby as an option for the module type, ensure you have installed the Ruby plugin.

Considerations for bitemporal data in a distributed data store – Part 1

I’m calling this Part 1 of what may be an ongoing theme in this blog: considerations for storing bitemporal data in a distributed data store. In my particular case, that data store is Oracle’s Coherence.

Standard three-timestamp or four-timestamp bitemporal data makes use of a Transaction (or Transaction Start) time to distinguish entries with partially or fully overlapping Valid Time ranges. In fact, the Transaction Start time should be unique across all entries that refer to a given atomic unit of data in the data store (a particular object, if your data store records objects, or a particular attribute, if that is what it records). In some RDBMSs, the uniqueness of Transaction Start is accomplished automatically. In others, it may be easily accomplished by virtue of having a single centralized transaction point and system clock. In a distributed environment, however, one must make a little extra effort due to the distributed nature of transactions and the probable differences in system clocks among the members of the system.

The solution I have used is to write a so-called “sequential now” timestamp generator. This is a service provider that behaves much like a distributed ID generator. It transactionally ensures that, when demanded, the client receives a timestamp representing “now”, except in the case where “now” is equal to the previously provided “now”; in that case, it increments the previous “now” by the smallest possible increment (defined by the granularity of the system, or chronon) and provides that instead. Provided that the granularity of your system is small, i.e. on the order of nanoseconds, this is likely to be an acceptable approach. If, on the other hand, your granularity is on the order of seconds, minutes, or hours, increasing transaction rates will cause an undesirable situation where the “sequential now” creeps further and further away from the real system “now”. In such a situation, another approach is likely to be required.

In Coherence, such a generator can be easily implemented as an EntryProcessor. That is left as an exercise for the reader.
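For those who want a head start on that exercise, here is a minimal sketch, assuming a millisecond chronon for simplicity; the class name, cache, and key below are illustrative, not part of any Coherence API:

import com.tangosol.util.InvocableMap;
import com.tangosol.util.processor.AbstractProcessor;

// A "sequential now" generator. Invoke it against a single well-known
// entry; Coherence processes entries atomically, which serializes all
// callers and provides the transactional guarantee described above.
public class SequentialNowProcessor extends AbstractProcessor {

    @Override
    public Object process(InvocableMap.Entry entry) {
        long now = System.currentTimeMillis();
        Long previous = (Long) entry.getValue();
        if (previous != null && previous >= now) {
            // The clock has not advanced past the last issued timestamp
            // (or went backwards), so bump by one chronon instead.
            now = previous + 1;
        }
        entry.setValue(now);
        return now;
    }
}

A client would then obtain a timestamp with something like timestampCache.invoke("sequential-now", new SequentialNowProcessor()), where the cache and key are whatever your deployment designates for this purpose.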

Coherence: using a custom operational override file with “system-property” command line overrides

The situation I am going to describe might be obvious to some people, but apparently it wasn’t to me. I was diagnosing a problem where the tangosol.coherence.clusterport system property, specified on the command line with the -D flag, was being ignored by my Coherence-enabled code. The code was not overwriting that system property; however, it was specifying values for other system properties that override Coherence settings. Among them, it was specifying an override file by setting the tangosol.coherence.override property. Here’s an analysis of the problem:

If you write your own Coherence “operational override” file (specified with the tangosol.coherence.override property), you must re-iterate the system-property attribute on each element whose command line override you wish to keep.

For example, in tangosol-coherence.xml, the clusterport command line override is declared on the port element via the system-property="tangosol.coherence.clusterport" attribute. If you override that element in your custom override file, you must include the system-property attribute if you want to keep using the command line override. In other words, the “preconfigured overrides” mentioned throughout the documentation (especially here) are only guaranteed to be present if you use the default configuration files. Otherwise it is up to you.
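Concretely, a custom override file that preserves the cluster port override might look something like this (a sketch; the element nesting mirrors tangosol-coherence.xml, and the port value is illustrative):

<coherence>
    <cluster-config>
        <multicast-listener>
            <!-- Re-iterating system-property keeps -Dtangosol.coherence.clusterport working -->
            <port system-property="tangosol.coherence.clusterport">8088</port>
        </multicast-listener>
    </cluster-config>
</coherence>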

This makes sense if you look at the provided override files, for example tangosol-coherence-override-dev.xml. Its time-to-live element, for example, re-iterates the system-property setting that is also present in tangosol-coherence.xml. Unfortunately, I did not notice this, and instead assumed that I did not have to re-iterate the “preconfigured override”. That said, I do like the ability to remove the command line override functionality from specific elements. It allows us to make some of our configurations a bit more rock solid, so that command line parameters cannot mess them up.

Perforce “Deleted: edits to the file cannot be submitted without re-adding”

I recently could not submit my Perforce changelist because “out of date files must be resolved or reverted.” After being baffled for a few moments (files were up to date, no files needed to be resolved), I also noticed red text in my p4v submit dialog box that said, “Deleted: edits to the file cannot be submitted without re-adding.” A look through my changelist revealed a deleted file that reported its version as #0/2. I reverted the file, and checked in successfully.
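If you hit the same thing from the command line rather than p4v, reverting the offending file is all it takes (the depot path here is illustrative):

p4 revert //depot/project/stale-deleted-file.txt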

Windows 7 repair boot loop

I was diagnosing a friend’s Dell laptop running Windows 7 the other day: every time it booted, a Windows repair wizard would appear. It would scan for problems and find none. A system restore did not help, nor did SFC. I couldn’t even boot into safe mode. The repair wizard would start even after I recovered the OS partition from the Dell recovery image! That’s when I began to suspect that there wasn’t really any problem with Windows itself. After some more searching online, I decided to try bootrec /fixmbr and bootrec /fixboot. Voila, Windows began to boot normally! I wish I had tried that to start with, since the system repair report never listed any specific problem.
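For reference, both commands are run from the Command Prompt in the System Recovery Options (the same environment the repair wizard runs in):

bootrec /fixmbr
bootrec /fixboot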

Using JFormattedTextField with Swing Data Binding from tornado.no

Although there are many data binding libraries available for Swing, the one I have been dealing with lately is from http://databinding.tornado.no/

After I added a JFormattedTextField as a bound field, I noticed that it wasn’t obeying the text-to-value and value-to-text conversions I had specified in my formatter. Instead, the data binding was trying to populate the model with the text from the field rather than its value. A look at the source code of CoreUIBridgeProvider shows that JTextFieldBridge is used for JFormattedTextFields, and that bridge only gets and sets the text of the field. That behavior is sufficient for JTextField, but not for proper use of JFormattedTextField.

To fix the issue, I wrote a custom bridge specifically for formatted fields. It simply passes the value object straight through:

@Override
public void setUIValue(JFormattedTextField component, Object value) throws ConversionException {
    // Push the model value into the field; the field's own formatter
    // takes care of the value-to-text conversion.
    component.setValue(value);
}

@Override
public Object getUIValue(JFormattedTextField component) throws ConversionException {
    // Hand the field's typed value (not its raw text) back to the binding.
    return component.getValue();
}

After I added a new entry to my bridge provider to map my new bridge to my custom JFormattedTextField class, it immediately started working as expected.

Letters typed into JTextField appear in reverse order

I was working on a JTextField in a NetBeans Platform application (Java Swing) and started to see some bizarre behavior. The cursor would remain to the left of each letter typed, so the letters appeared in reverse order and each word came out spelled backwards. Rendering was also messed up: most new letters were not visible until the cursor had passed through them. Once it occurred, the problem spread to other text fields.

I hadn’t noticed, but an exception had actually been thrown, probably corrupting the JTextField’s listeners in some way. The easy answer is to fix the exception. If I had noticed it earlier, I wouldn’t have been so baffled!

AspectJ with IntelliJ IDEA

So you’re trying to use AspectJ in a Maven project with IntelliJ IDEA. You add the basic dependency and plugin to your POM file, and then IDEA gives you this message:

IDEA was unable to find AspectJ compiler JAR among plugin dependencies.
Please check Settings | Compiler | Java Compiler

The solution is to add a dependency on aspectjtools. For example:

<dependency>
    <groupId>org.aspectj</groupId>
    <artifactId>aspectjtools</artifactId>
    <version>1.6.9</version>
</dependency>

When IDEA imports the POM changes, it will automatically set the AJC compiler and perform some extra indexing work, and then you’re done!
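For completeness, the “basic dependency and plugin” mentioned above are typically the AspectJ runtime plus the aspectj-maven-plugin; a sketch of that part of the POM (versions are illustrative) might be:

<dependency>
    <groupId>org.aspectj</groupId>
    <artifactId>aspectjrt</artifactId>
    <version>1.6.9</version>
</dependency>

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>aspectj-maven-plugin</artifactId>
    <version>1.4</version>
    <executions>
        <execution>
            <goals>
                <goal>compile</goal>
            </goals>
        </execution>
    </executions>
</plugin>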

Hard Drive Cloning Software

If you are trying to choose software to clone your hard drive, unfortunately I have to recommend Seagate/Maxtor’s MaxBlast over the open source Clonezilla. The current version of Clonezilla (as of this writing) does not let you easily specify the sizes of the partitions on the new hard drive. It can only resize all the partitions proportionally or duplicate them at the same size (which means that if your new drive is bigger, you have to enlarge your partition later). Of course, if you only have one partition, it doesn’t matter.

The other thing is that Clonezilla showed a warning about one of the new partitions not starting on a cylinder boundary. That issue doesn’t occur with MaxBlast either.