Samstag, 3. Juni 2017

Hello CoreMedia CMS 9

In any project based on the CoreMedia CMS where sites have to be created from scratch, at some point we ran into the need for a minimal content set for that site. That content should be unrelated and unconnected to any other data and content in the system - really self-contained. It should make clear which elements and settings are actually needed, out of the wealth of options, for that very project or site within the instance of the software.

Too much Content

For quite some time now, CoreMedia has provided a load of demo-site content for different usage scenarios of their products. It is quite easy to set up a system that really shows something.

From my point of view, taking one of these demo sites and ripping out the unnecessary parts was not helpful, since - especially with new releases - you might end up keeping too much data just because it might be needed. At least a feeling of unknown dependencies sometimes remained.


A small content snippet with as few elements as possible still seemed to be missing. And this still holds true for the latest release CoreMedia CMS-9 (and LiveContext 3 as well), as a small personal survey showed.

Hello World

Again involved in a project based on CoreMedia products, I am now using the latest - and greatest - version of the CMS product, CoreMedia CMS 9, and asked some old friends to help me.

After showing them the system and demos, one of the first questions was: "Ok, nice, where is the minimal Hello World site?".

From that feedback, given without any prior CoreMedia-related background, I assume that developers expect that part of a software - any software - to be there.

The first step was to ask people for their starting-point content sets or Hello World sites. The result was then reworked into a CoreMedia CMS 9 Hello World site as depicted below:

It can be imported into the repository and instantly be used as a Hello World site - though, with its static content, only as a Hello World site.

Configurable Content

This is a nice missing piece - I think - for CoreMedia CMS 9, but it is not really the last step towards a starting point for a project. For every re-use, that Hello World content has to be reworked to use the needed
  • Locale
  • Path
  • Title
  • URL Segments
and the like. So there still is some work to do. But this kind of work is even more schematic and boring than the setup of the minimally needed elements.
This ended in a rewrite script and a content workspace, which I would now like to make available to any interested developer in the CoreMedia sphere.
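The rework of locale, path, title, and URL segments is essentially schematic search-and-replace over the content to be imported. A minimal sketch of such a rewrite using sed - all file names, XML structure, and placeholder values here are hypothetical, not the actual workspace layout:

```shell
#!/bin/sh
# Sketch: rework placeholder values in a Hello World content file.
# The XML structure and placeholder names are made up for illustration.
set -e
tmp=$(mktemp -d)
cat > "$tmp/content.xml" <<'EOF'
<document path="/Sites/HelloWorld" title="Hello World" segment="helloworld"/>
EOF
# Replace path, title, and URL segment with project-specific values
sed -e 's|/Sites/HelloWorld|/Sites/MyProject|' \
    -e 's|Hello World|My Project|' \
    -e 's|helloworld|myproject|' \
    "$tmp/content.xml" > "$tmp/reworked.xml"
cat "$tmp/reworked.xml"
```

The real script works on a whole content workspace instead of a single file, but the principle stays the same.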

Find the resulting code at Bitbucket, GitLab, and GitHub.
Enjoy. Feedback is welcome.

Dienstag, 17. Januar 2017

CoreMedia Build Docker

CoreMedia releases 7 and 8 had one very nice advantage: you didn't have to install anything other than stock Java and Maven to be able to build the workspaces.

Where we come from

This is one of the reasons why it was possible to use standard services like Travis CI and GitLab CI with their current container-based approach to build these rather large and complex workspaces. Official Docker containers for Maven just did the job.

GitLab CI example to build a CoreMedia 7 workspace:

image: maven:3-jdk-7

stages:
  - build

build:
  stage: build
  script:
    - mvn install -s workspace-configuration/maven-settings.xml -B
  artifacts:
    paths:
      - packages/*/*/target/*.rpm
    expire_in: 3 days

cache:
  paths:
    - mvnrepo/

Where we go to

With the upcoming new CoreMedia platform for CMS and LiveContext this changes a bit again. And that bit is really so tiny that I didn't want to lose the possibility of using the container-based CI tools and go back to Jenkins (which is what my customers provide me with).
That being said, a basic Docker file is prepared very quickly to include
  • Java 8
  • Maven 3
The missing part is the command line tool for Sencha ExtJS, the framework CoreMedia Studio heavily relies on. This tool comes with a graphical installer and cannot be installed in a scripted way (as far as I can see).
Since I had already installed that tool locally, I decided to write a little preparation script to convert the installation into a copyable artifact, changing an absolute symbolic link into a relative one.

Prepare a copy-able sencha Cmd copy:
S=`which sencha`
DIR=`dirname $S`
(cd build ; cp -rdp $DIR .)

SD=`(cd build/Cmd ; find . -type d -name "6\.[0-9]*") | sed -e 's/^\.\///g'`
(cd build/Cmd ; rm sencha-$SD ; ln -s $SD/sencha sencha-$SD)
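With such a relocatable copy in build/, the Docker image description itself stays short. The following writes a hypothetical Dockerfile - the base image tag and the target path /opt/Cmd are assumptions for illustration, not the exact file from my setup:

```shell
#!/bin/sh
# Sketch: generate a basic Dockerfile for the CI build image.
# Base image and paths are assumptions, not the actual file.
set -e
tmp=$(mktemp -d)
cat > "$tmp/Dockerfile" <<'EOF'
FROM maven:3-jdk-8

# Add the prepared, relocatable Sencha Cmd copy from build/
COPY build/Cmd /opt/Cmd
ENV PATH="/opt/Cmd:$PATH"
EOF
cat "$tmp/Dockerfile"
```

Starting from a stock Maven image keeps Java 8 and Maven 3 covered; only the Sencha Cmd copy has to be added on top.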

What's left

Intentionally left out are the authentication part for Maven with the CoreMedia repositories and the integration with GitLab CI (which I will be using), since these two are not really generic.

How to get it

As usual, you find the sources for this small piece of code on GitHub to clone, fork, and use. And, also as usual, feedback is highly welcome.

Montag, 7. November 2016

Dinistiq Version 0.6

The minimalistic Dependency Injection library for Java - Dinistiq - was published in version 0.6 some time ago.
This version still doesn't change the main outline of the project: It deals with a single - singleton - scope with a few extensions in a very small footprint library.
The library does not introduce any custom tags, but fully relies on a subset of the JSR330 definitions.
Called the qualifier release, this release - besides dealing with qualifiers - updates the dependencies dinistiq relies on.

Qualifiers Introduced

Despite the fact that Dinistiq passes the TCK for JSR330, it lacked the handling of qualifiers following the standard. This has been fixed with this release, at the expense of some container performance.
None of my projects uses this feature, so the implementation fully relies on the TCK run integrated into the CI builds and the other tests available for dinistiq. These tests have of course been extended again.


Starting from release 0.4, dinistiq is available from the JCenter repository.

Latest snapshots are provided in a separate snapshot repository.

There are already some 0.7-SNAPSHOTs available, which reduce the number of dependencies needed by Dinistiq.


There are no special migration steps necessary for existing applications.

Sonntag, 8. Mai 2016

Java 8 on OpenShift

For some time now - supported by many articles I read on the web - I thought it was a fact that

OpenShift does not support Java 8

While the OpenShift Platform as a Service environment - even some time after Java 7 left the public support track - doesn't really push you to use Java 8, it supports it well in the sense that it's simply there - just not the default.
Of course this has been mentioned before, but there are too many recent posts around stating the opposite.

OpenShift has Java 8 in Place

When you stop reading articles and just dig around in the system, you very quickly come to the point where you see that OpenShift has numerous Java Development Kits installed and that the default is set by standard Linux means: /etc/alternatives.
So you don't have to download a JDK, invest storage space, and update it regularly to the current version. This is handled by OpenShift.

Wide Choice of JDKs

The /usr/lib/jvm folder has everything you might need - JDK-wise - and a simple scriptlet in your deployment will bring you to the needed JDK:

export JAVA_HOME=`find /usr/lib/jvm -name "$JAVA_VERSION"`
if [ -z "$JAVA_HOME" ] ; then
  echo "No JDK matching $JAVA_VERSION found in /usr/lib/jvm" >&2
fi

So, no need to stay away from Java 8 because of something related to OpenShift (anymore).

Prepared Quickstart for Java 8, Gradle, and Tomcat

For the users of Gradle and Tomcat I updated this in my quickstart based on the DIY cartridge.

Freitag, 6. Mai 2016

Tangram Release 1.1

The 1.1 release is a consolidation release removing outdated stuff, leaving room for the latest versions of the used libraries and modules and providing some polish in the existing feature set.

Beyond the App Engine

The sources and release notes can be found in the source repository; the binaries again reside on JCenter. This release includes some notable, overdue dependency updates like
  • Servlet 3.1 API now included
  • JSP 2.1 now included
  • DataNucleus updated to 4.1
  • dinistiq updated to 0.5
To allow this, it leaves out the Google App Engine support.

Gretty Plugin

Since not only the Google App Engine sticks to old releases of the Java web APIs, but Gradle also comes with an obsolete Jetty plugin using an old version of Jetty, all examples have been migrated to Gretty, which is now the preferred container integration and even application packaging option.

Morphia Support

The latest Tangram applications always used MongoDB as the storage backend with either JPA or JDO as the API. If you don't need the API abstraction or want an even leaner storage integration (before that, DataNucleus JDO and EBean had the smallest footprint), you can now start using the "MongoDB only mapping" Morphia.

Markdown Support

Up to now, every property using char[] for its getters and setters was handled as structured text. It was edited as an HTML fragment (p or div) using the CKEditor.
Starting with this release, every property using org.tangram.content.Markdown for its getters and setters is handled through the CodeMirror editor as Markdown syntax. It is transformed to HTML in rendering.
Templates can now also be written in Markdown syntax whenever appropriate; here as well, the output is transformed to HTML before passing the results to the other layers of Tangram.

Taking Care of the Modification Date and Time

If you need to track the modification date and time of an object through the Tangram editor module or the FTP service, you can now do so - and the standard storage-backed classes of Tangram now record it.

File Restart Cache

The restart caches, used for faster restarts of the application with data which should not change between sessions of the application, now only have to be cleared manually on error conditions and no longer on changes of the application.

Exporting and Importing

The export and import features of Tangram are now strictly symmetrical: Full content export and import  - more or less independent of the storage flavour in use - and a code exporter and importer for the sources stored in the repository.

Fat JARs

Additionally it is now possible to leave out the "war" dependencies since
resources like templates can be placed in the respective JARs.

Extended testing

Tangram now for the first time has decent test coverage. While there still is room for more in-depth testing of many functions, we can now put more trust into the single snapshot builds. This was quite important, since most Tangram applications not residing on the Google App Engine had already migrated to the 1.1 snapshots.

Sonntag, 22. November 2015

Tangram Release 1.0

Since Tangram reached more than the set of features originally intended back in 2009, it was time to call the current state a 1.0 release.


The name Tangram points out that the main objective was to create web applications from a fixed set of modules and options which can form nearly any shape you would need. You can start quickly with a limited set of functionality and let the application grow. Fixed set in this case only means that you will need e.g. a model implementation, but still have a choice of which one you want to use. Also the glueing together with a Dependency Injection solution provides several options.
Another intention was to provide reasonable defaults for nearly every aspect where this is possible. You don't need to copy tons of files which do technically necessary things you didn't think of at the time. You only set up the things you know you need, and all the other features stay quiet. At least they should do something reasonable without getting in your way.


Tangram is in production use since 2011 by Provocon, Ponton, and others.

Dynamic Extendibility

For many extensions you will not even need a deployment, just a change in the data repository, forming small and stable cores where the application as a whole can be adapted to changing requirements quickly, easily, and securely.
Apart from the obvious presentation stuff, this includes Groovy code to create and parse URLs, extend the business logic, and even extend the model layer with new model classes.

Core Features

We provide object-oriented templating for objects from JPA, JDO, EBean, and CoreMedia repositories. The core system emphasises dynamic web programming through CSS, JavaScript, Velocity templates, and Groovy code in the repository, together with basic CDN support, code minification, and caching support.

URLs can be formatted in nice, SEO friendly schemes and be easily mapped to actions to provide the result views for users.
The implementation of authentication and authorization now uses the pac4j set of libraries to provide a seamless and easy integration of plain user lists in files or the repository, OAuth providers, OpenID providers, the Google App Engine user service, and others for the web application and the base system itself.
This authentication solution is used to grant access to the system itself, but also to provide support for protected content areas in your applications. You only have to focus on the question of which content needs to be protected by an access granting scheme. Logins can use a generic login page or be integrated into your application design.

Generic Editor

Except for CoreMedia, we provide a generic web editor which is now responsive (there is no separate mobile editor anymore) and which can be loaded from the respective cloud locations of the components used (CKEditor, CodeMirror). It can also be extended with our dynamic web programming facilities.

Glueing Stuff Together

The mentioned default set of configurations and the option to customize it to your needs is achieved by Dependency Injection libraries. Tangram supports the usage of the Spring Framework, Google Guice, and dinistiq. It should be possible to add other solutions as well, provided that they offer a decent set of functionality (which is not always the case for smaller DI libs), including e.g. optional values, overriding of configurations, and deep generics introspection.


To illustrate the usage and provide a nice starting point, a set of example applications is provided in sync with the releases.
A documentation wiki is now being started.

Where the 1.0 Release Ends

One real limitation is the older set of JSP/Servlet APIs, which needs to stay in place to support the Google App Engine. So this is the final release to support the App Engine, in order to be able to move ahead to newer APIs in several areas. This will not be achieved in the 1.0 branch.

Mittwoch, 28. Oktober 2015

Gradle Plugin for EBean, JPA, and JDO Enhancing along with Minification and Overlaying

During the development of the Tangram framework project a set of build related things went into a plugin for the Gradle tool.
The functionality of this Tangram Gradle Plugin is only in very small parts directly related to Tangram. It is more or less a general-purpose plugin for applications needing
  • Byte-code transformation of model classes for the JDO, JPA, and EBean ORM layers
  • Minification of CSS and JavaScript code to be placed in WAR artifacts
  • Support for underlying of WAR files into others (similar to overlays)
The good news for today is that, with the latest version 1.0.5, the plugin can be used from the central Gradle plugins repository with the - not very surprising - id tangram.gradle.plugin. So some of the usage notes have to be aligned with this situation.


Just a few lines have to be added to your Gradle build script to use the plugin:

plugins {
  id "tangram.gradle.plugin" version "1.0.5"
}

All of the following steps described here take place without any additional configuration.

Prepare EBean, JPA, and JDO Model Classes

When used with Java projects - and when some data model classes are discovered - the plugin tries to prepare them for use with the respective Object Relational Mapper (ORM). The ORM APIs supported are
  • JPA
  • JDO
  • EBean
These APIs in turn are supported by a number of implementations. The supported implementations are
  • DataNucleus (JPA and JDO)
  • EclipseLink
  • Hibernate
  • OpenJPA
  • EBean
These OR mapper API implementations require (DataNucleus and EBean) or recommend (the others) applying byte-code transformations called "Enhancing" or "Weaving" to the class files. The compiled code is extended with some database access support to implement the active record pattern more or less seamlessly.
The API and implementation library in use is discovered from the names of the elements of the class path of the project. If one of the mentioned libraries is found, the corresponding byte-code transformation is applied at the appropriate step (post compile or pre JAR creation).

martin@nelson:~/proj/tangram/sites/naturinspiriert$ gradle clean build
Performing DataNucleus JPA byte code transformation.
ENHANCED (Persistable) : org.naturinspiriert.RootGroup
ENHANCED (Persistable) : org.naturinspiriert.Topic
ENHANCED (Persistable) : org.naturinspiriert.AbstractGroup
ENHANCED (Persistable) : org.naturinspiriert.ImageData
ENHANCED (Persistable) : org.naturinspiriert.Article
ENHANCED (Persistable) : org.naturinspiriert.Linkable
ENHANCED (Persistable) : org.naturinspiriert.Group
DataNucleus Enhancer completed with success for 7 classes. Timings : input=60 ms, enhance=54 ms, total=114 ms. Consult the log for full details
7 classes enhanced.

:compileTestJava UP-TO-DATE
:processTestResources UP-TO-DATE
:testClasses UP-TO-DATE
:test UP-TO-DATE
:check UP-TO-DATE


Total time: 4.53 secs

(The highlighted message indicates the use of the DataNucleus Enhancer.)

The byte-code transformations directly use the transformer of the respective library in use, except for the OpenJPA case, where the Ant task of the enhancer is integrated. Some of the transformers issue some logging.

Switching Off

Additionally, it is possible to switch off the byte-code transformation for JPA annotated classes by adding to your build file

// build.gradle: 

in case this might be necessary, e.g. to use only the other parts of the plugin. EclipseLink, Hibernate, and OpenJPA also support the use of plain Java classes without the byte-code transformation.

JavaScript and CSS Minification

When used in conjunction with the war plugin, CSS and JavaScript resources are automatically minified.
The plugin checks for resources with the filename extension .css for Cascading Style Sheets and .js for JavaScript. Matching resources are minified using the YUI Compressor.
It is not possible to minify resources included from archive files, only file resources local to your project. So contents from archives - while being included in the resulting web archive - cannot be minified. (We would expect WAR files to contain minified resources, like WAR files generated using this plugin do.)
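The selection rule can be pictured with a small shell sketch - the directory layout here is an assumption, only the filename extensions matter:

```shell
#!/bin/sh
# Sketch: which resources the minification step would pick up.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/src/main/webapp/css" "$tmp/src/main/webapp/js"
touch "$tmp/src/main/webapp/css/site.css" \
      "$tmp/src/main/webapp/js/app.js" \
      "$tmp/src/main/webapp/index.html"
# Only local *.css and *.js resources are minification candidates;
# index.html is left alone.
find "$tmp/src/main/webapp" -name "*.css" -o -name "*.js" | sort
```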

Web Application Underlying

The plugin introduces a configuration named webapp for modules using the war plugin. Dependencies given in this configuration are extracted into the resulting WAR artifact.

// build.gradle:
dependencies {
  webapp "tangram:tangram-core:$tangram_version:war@war"
  webapp "tangram:tangram-editor:$tangram_version:war@war"

  compile "tangram:tangram-core:$tangram_version"
  // Persistence API JPA
  compile "tangram:tangram-jpa:$tangram_version:nucleus"
  compile "org.datanucleus:datanucleus-api-jpa:$versions.datanucleus"
  compile "org.datanucleus:datanucleus-core:$versions.datanucleus"
  compile "$versions.jdo_api"
  compile "$versions.persistence_api"
  runtime "org.datanucleus:datanucleus-mongodb:$versions.datanucleus"

  compile "tangram:tangram-editor:$tangram_version"
  runtime "tangram:tangram-dinistiq:$tangram_version"
  runtime "org.slf4j:slf4j-log4j12:$versions.slf4j"

  providedCompile "$versions.servlet_api"
  providedCompile "$versions.jsp_api"
}

This process is not really well described when called an overlay, so I call it underlying, since your web application's directory in fact is the overlay, and the other archives referenced and included must therefore be an underlying.
If your WAR relies on the contents of another pre-packaged or incomplete WAR, the contents of the latter will be copied into your resulting web application while you can override any file in this archive from your local web application contents directory.
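The effect of underlying can be pictured with plain file operations - the archives are represented by directories here, and all paths are hypothetical:

```shell
#!/bin/sh
# Sketch: local webapp contents override the underlying archive.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/underlying" "$tmp/webapp" "$tmp/war"
echo "from underlying" > "$tmp/underlying/page.jsp"
echo "from underlying" > "$tmp/underlying/style.css"
# The local web application directory overrides single files
echo "from local webapp" > "$tmp/webapp/page.jsp"
# First extract the underlying archive, then copy the local contents over it
cp -r "$tmp/underlying/." "$tmp/war/"
cp -r "$tmp/webapp/." "$tmp/war/"
cat "$tmp/war/page.jsp"    # -> from local webapp
cat "$tmp/war/style.css"   # -> from underlying
```

Everything from the underlying archives ends up in the WAR, while any file present in the local directory wins.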

Version List

The plugin introduces a versions object which collects version strings for a number of libraries. This ensures that any project using the plugin can use recent versions of these libraries and that version changes are applied in sync. The use of this part is optional, and you have to explicitly use the versions in your build file, since this cannot be applied transparently.
Some random examples:

dependencies {
  compile "org.pac4j:pac4j-openid:$versions.pac4j"

  compile ("org.apache.openjpa:openjpa:$versions.openjpa") {
    exclude group: 'asm'
  }

  compile "org.eclipse.persistence:org.eclipse.persistence.jpa:$versions.eclipselink"
  compile "$versions.persistence_api"

  compile "org.hibernate:hibernate-core:$versions.hibernate"
  compile "org.datanucleus:datanucleus-api-jpa:$versions.datanucleus"
  compile "org.datanucleus:datanucleus-core:$versions.datanucleus"
  compile "$versions.jdo_api"
  runtime "org.slf4j:slf4j-log4j12:$versions.slf4j"

  testCompile "org.testng:testng:$versions.testng"

  // your container will have this for you
  providedCompile "$versions.servlet_api"
  providedCompile "$versions.jsp_api"
}

Manual Mode

Of course it is still possible to call the methods performing the different tasks directly, as described in the 0.9 plugin blog post. This should only be necessary if you e.g. want to enhance files in the unit test section of your code, which is considered a very rare case.