
Nashorn performance? Just use default settings and very few engines

tldr: Use only one script engine in your Nashorn app, and don’t bother with the parameters for persistent code cache or optimistic typing: they’re of little real use.

Reading Pieroxy’s article comparing JS engines for Java, I noticed it didn’t cover Nashorn’s other parameters (-pcc, -ot, …). So here are the results:

Chart scales are seconds (y-axis) over iterations (x-axis, 200 total).

Optimistic typing kills startup time


All options arrive at the same times in the end

Optimistic type info speeds up optimization in the mid-game – but beware that it slows the first runs considerably, with or without the nashorn.typeInfo.maxFiles cache option. The persistent code cache (-pcc, the blue line) doesn’t really help.

System.setProperty("nashorn.typeInfo.maxFiles", "20000");
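For reference, the flags tested above can also be handed to an embedded engine via the nashorn.args system property on JDK 8. This invocation is a sketch – benchmark.jar stands in for whatever app embeds the engine:

```shell
# Pass Nashorn engine flags through the nashorn.args system property (JDK 8).
# --optimistic-types corresponds to -ot, --persistent-code-cache to -pcc.
java -Dnashorn.args="--optimistic-types=true --persistent-code-cache=true" \
     -Dnashorn.typeInfo.maxFiles=20000 \
     -jar benchmark.jar
```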


The hidden iceberg when evaluating the warmup is that it uses a lot of CPU cores. Here are the CPU times:


Here a benefit of the optimistic type cache is visible (the “cpu ot maxFiles” line) – but only in the mid-game, and only for a moment.


I also tried to find benefits of the optimistic type cache and the persistent code cache when using multiple engines, but couldn’t find any. A new engine still needs to warm up; see the four cases here (with/without optimistic typing, and creating the second engine either before using the first or after the first has run for 30 iterations). There is never a big benefit for the new engine:



Comparison of 1.8u51 and 1.8u60

u60 seems to get less benefit from the type info cache during mid-game, but otherwise shows pretty similar results.

[Charts: u60 real-time optimization and u60 CPU-time optimization]


Optimization takes its sweet time, and the real hero is Node (see Pieroxy’s article), which is amazing right from the first iteration. For Java the choice is still Nashorn, but only partially for performance reasons; more on that in another post.

Test info

Nashorn created a persistent code cache of 4.3 MB in the work dir of the test and 1 MB of type info cache in ~/Library/Caches/.

Test setup: 2.4 GHz dual-core (HT) 2011 MacBook with SSD, JDK 1.8u51 and 1.8u60.

Code: Possibly later.

Nashorn performance review data spreadsheet (with many more runs than in here).

Update on Dec 29th 2015:
Improved the images on the second engine warmup problems.


List of Cassandra cli validation types

Listing UTF8Type and friends in the source:;a=tree;f=src/java/org/apache/cassandra/db/marshal

Cassandra CQL is still evolving. I cannot use it because of bugs like . tldr: You cannot write from the non-CQL Astyanax or Hector clients to a CF defined in CQL when the data uses non-default validators. So inserting a date will fail, because the default validator is UTF8Type.

I think that’s OK, since CLI definitions give you all the CQL 3 benefits – aside from the SQL-lookalike syntax.
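To make that concrete, here is a sketch of a cassandra-cli definition that sets the validators explicitly (the column family and column names are illustrative); with the per-column validation_class in place, non-CQL clients like Hector or Astyanax can write typed columns:

```
create column family Events
  with comparator = 'UTF8Type'
  and key_validation_class = 'UTF8Type'
  and default_validation_class = 'UTF8Type'
  and column_metadata = [
    {column_name: created, validation_class: 'DateType'}
  ];
```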

Tomcat on OpenVZ Vserver/PVS and Ubuntu


Works with an older kernel, but doesn’t work with 2.6.32. You need to set CPU affinity on the OpenVZ instance.


I got non-starting Tomcats, and sometimes errors like this, on OpenVZ vservers:

# A fatal error has been detected by the Java Runtime Environment:
# Internal Error (nmethod.cpp:2175), pid=2165, tid=3066727232
# guarantee(nm->_lock_count >= 0) failed: unmatched nmethod lock/unlock
# JRE version: 6.0_33-b03
# Java VM: Java HotSpot(TM) Client VM (20.8-b03 mixed mode linux-x86 )
# An error report file with more information is saved as:
# /usr/local/apache-tomcat-7.0.28/hs_err_pid2165.log
# If you would like to submit a bug report, please visit:

There are other reports of problems like this on the internet. It happens with both the Oracle JDK and OpenJDK. The funny thing is that it works for me on a 2.6.18 kernel!

Edit: I cannot test this, but setting CPU affinity seems to solve the problem:
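One way to set the affinity is to pin the Tomcat JVM to a single core with taskset at startup; this is a sketch, assuming taskset (from util-linux) is available inside the container, and the Tomcat path and core number are illustrative:

```shell
# Pin Tomcat's JVM to CPU core 0 at startup so HotSpot only ever
# sees one core (taskset is part of util-linux).
taskset -c 0 /usr/local/apache-tomcat-7.0.28/bin/ run
```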


Btw: I had to upgrade shorewall on Ubuntu 10.04 – shorewall 4.4.6 does not switch off mangling, even if it detects that the running kernel doesn’t have the module.

MANGLE_ENABLED=No is a good idea for OpenVZ servers.

Troublesome Java Deployment with Chef – found reference on the mailing list

After a week of trying to get Java deployment with Chef working, I found two important posts:

Announcement of new application cookbook

Before that: Announcement of work on the application cookbook

For me, Java WAR deployment isn’t working. I didn’t find much about it on the net, so I want to leave a trace.

You can find the Java Quick Start on Opscode’s website – but you’re told that it is very much deprecated.

Opscode doesn’t paint a clear picture of Java support. I am currently writing my own jwebapp deployment cookbook.

Check for attribute in Chef recipe: Not an easy task

The doc says n.attributes?("hostname"), but that failed me and took some time to understand. I had to check for nilness, too:

if !(n.attributes?("hostname") && n["hostname"] != nil)
  Chef::Log.warn("Not adding node: #{n} – no hostname. Not bootstrapped correctly?")
else
  Chef::Log.info("Found hostname attr for #{n}")
end

Why I favor Cassandra

Just stumbled on this nice article on Cassandra column access by Aaron Morton:
This is reason enough to note once and for all why I like Cassandra.
  • Scales linearly (with RandomPartitioner – you need to build your own indexes)
  • No single point of failure – in particular no Master/Slave setup and its ops troubles
  • Allows sorted buckets, even big sorted buckets – good for building indexes (give me the newest 10 messages, range scans, …). Why create a dependency on something like Elasticsearch or Lucene if you can have it in your DB?

To contrast with others:

  • Mongo: 
    • Single point of failure: yes, the Master can lose writes. The default setup doesn’t have a commit log, and enabling one will kill performance.
    • Sorting: yes
    • Scales: if the shard key is chosen wisely – and you still have to build your own indexes sooner or later, too. Problems with locking: long-running queries with interleaved writes can kill performance, even lead to a failover. 2.2 is improving, but still not there yet.
  • Redis: has sorted buckets, but it’s unclear how performant inserts or reads in the middle of a sorted list are. Master/Slave replication only.
  • Riak: no support for sorted data, which makes e.g. maintaining a time-index of objects a potential performance/scale killer. No single point of failure. Good scaling for data without sorting requirements.
  • CouchDB: nice for append-only data and for storing data for batch processing. No single point of failure. Not a good idea for data that is updated frequently. Unsure about sorting. Its poor fit for updates stopped my investigation.
  • (HBase: very complex. Wants to do everything out of the box, and still had some stability issues when I checked.)

Cassandra is currently having a hard time – CQL is not ready yet (v3, needed for the recommended model of composite column names, still lacks library support), and therefore users, and especially new adopters, have a hard time choosing a solution.

But I think Cassandra has the most interesting feature set and the best fit for heavily distributed workloads – a sound value proposition. It forces you to really make the switch to distributed concepts, just like Couch. It makes you take difficult decisions and do away with non-scalable and non-fault-tolerant practices early, when it’s still cheap.

Moved from Blogger due to better commenting

I started this blog on Blogger: – but since answering comments on my own posts repeatedly failed, I moved to WordPress.

Maybe this discussion on Google Groups is right, and Blogger wants you to enable third-party cookies. I don’t like that.