Friday 18 December 2015

Java Random

I recently helped a colleague determine whether Java's Random class was worth using in a situation where the client had to be certain that the numbers were truly random.
Our concern was that while a small sample of random numbers (especially when the generator is seeded with the current time) is fit for most purposes, it is not appropriate where due diligence needs to be performed.
The reason is that most random number generators are only pseudo-random.
So we did some research and asked some of our experts about the suitability of Java's Random class. Things to note are listed below:
  • Most of the documents on the subject are quite old, so there may have been improvements, but I couldn't find anything which said the implementation had materially changed.
  • It is worth noting that we are looking at pseudo-random number generators and not truly random numbers. For the use cases discussed I don't believe that would be an issue, but it would be in some gambling and cryptography applications.
  • Where true random numbers are required, a random number service and/or specific random number hardware is used. I don’t believe this is required in this application [https://api.random.org/json-rpc/1/]
  • People have been fairly scathing of Java Random, identifying that the current implementation only uses 17 bits of entropy from the initial seed (i.e. 1 in 131072 starting points) and demonstrates repeating patterns after relatively few calls to generate numbers (of the order of ~50,000). Statistical tests have been used to show that it doesn't produce very good random numbers [http://www.alife.co.uk/nonrandom/]
  • SecureRandom is a drop-in replacement for Random and does produce much better random numbers that pass statistical tests, but it is 60 times slower than Random [https://dzone.com/articles/java-programmer%E2%80%99s-guide-random]. In the way that the batch report is using random numbers, it appears that of the order of 500 random numbers would be required. A quick test (see the sketch after this list) shows that 500 SecureRandom numbers only take ~15 milliseconds to calculate, which isn't going to significantly affect the time taken for a result.
  • There are libraries that support alternative ways of creating random numbers. These produce good quality pseudo random numbers in less time than the standard Java implementations, but would require additional libraries and dependencies to be managed. In this case I do not believe it would be worth the additional effort. [http://maths.uncommons.org/] [https://www.bouncycastle.org/java.html]
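For illustration, here is a minimal sketch of the sort of quick timing check mentioned above. The 500-number count matches the batch report figure; the class name and output format are my own, and actual timings will vary by machine and JVM.

 import java.security.SecureRandom;
 import java.util.Random;

 public class RandomTimingCheck {

     // Time how long it takes to draw 'count' ints from the given generator
     private static long timeIt(Random rng, int count) {
         long start = System.nanoTime();
         for (int i = 0; i < count; i++) {
             rng.nextInt();
         }
         return (System.nanoTime() - start) / 1_000_000; // milliseconds
     }

     public static void main(String[] args) {
         // SecureRandom extends Random, so it is a drop-in replacement
         System.out.println("Random:       " + timeIt(new Random(), 500) + " ms");
         System.out.println("SecureRandom: " + timeIt(new SecureRandom(), 500) + " ms");
     }
 }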


Based on the number of random numbers being generated (maximum of around 500 at a time), Random may well be sufficient.  But given the minimal cost of using SecureRandom it may be worth converting to that instead, to remove even a small possibility of concern.

Monday 30 November 2015

The AWS Well-Architected Framework

A repost from InfoQ but with a good PDF for later reading.

Amazon has published the AWS Well-Architected Framework (PDF), a guide for architecting solutions for AWS, with design principles that apply to systems running on AWS or other clouds.


Amazon has based the AWS Well-Architected Framework on four pillars and a number of design principles, as briefly outlined below.

Security. 

According to Amazon, security in the cloud covers four areas (Data Protection, Privilege Management, Infrastructure Protection and Detective Controls), and it recommends the following design principles to strengthen the security of a system:
  • Apply security at all levels 
  • Trace everything 
  • Automate responses to security events 
  • Secure the system at the application, data and OS level 
  • Automate security best practices 

Reliability. 

This pillar represents a system’s ability to “recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.” The areas covered by reliability are Foundations, Change Management and Failure Management, and the paper recommends the following design principles:
  • Test recovery procedures 
  • Automatically recover from failure 
  • Use horizontal scalability to increase availability 
  • Automatically add/remove resources as needed 

Efficiency. 

This is about efficient use of CPU, storage and database resources. It also covers the space-time trade-off, i.e. consuming more memory and disk space to solve a problem quicker, or using fewer resources but taking more time to solve it. The design principles recommended are:
  • Use advanced technologies 
  • Deploy the system globally for lower latency 
  • Use services rather than servers 
  • Try various configurations to find out what performs better 

Cost Optimization. 

This is evidently about optimizing costs by eliminating unneeded or suboptimal resources. Cost optimization should consider matching supply with demand, using cost-effective resources, keeping an eye on expenses, and lowering costs over time. The recommended practices are:
  • Transparently attribute expenditure 
  • Use managed services 
  • Buy computing resources in the cloud rather than hardware 
  • Use the cloud for its pay-as-you-go policy 
  • Do not invest in data centers 


The framework includes a list of questions to be used when assessing a proposed architecture, such as “How are you encrypting and protecting your data at rest?” or “How are you planning your network topology on AWS?”. The authors also provide their recommendations for addressing each of the problems mentioned in these questions, some of them applying only to AWS, others being valid for any cloud computing architecture.


This article has extracted the main points from the 56-page whitepaper on architecting solutions for the cloud. For a detailed explanation of all the best practices, read the full whitepaper.

Tuesday 28 July 2015

Version Numbering - Redux

The issue

I have been looking at version numbering for a project where the developers had stuck at 0.0.1-SNAPSHOT for 12 months and were starting to encounter issues with getting the correct JARs for their projects from their binary repository.

The solution was to use the features that are present in Maven and Jenkins to assist them in their processes.

I covered the basics of version numbering in another post (May 2012), but that just gives the version strategy for Apache projects. Since then I have come across Semantic Versioning, which is a great, almost definitive source on how you should work with versions.
However, it does not cater for the way Maven actually treats version numbers; it is merely compatible with it.

A good version number has a number of properties:
  • Natural order: it should be possible to determine at a glance between two versions which one is newer
  • Maven support: Maven should be able to deal with the format of the version number to enforce the natural order
  • Machine incrementable: so you don't have to specify it explicitly every time

What does Maven do?

For reference, Maven version numbers are composed as follows: <MajorVersion>.<MinorVersion>.<IncrementalVersion>-<BuildNumber | Qualifier>, where MajorVersion, MinorVersion, IncrementalVersion and BuildNumber are all numeric and Qualifier is a string. If your version number does not match this format, then the entire version number is treated as being the Qualifier. [See]
If all the numeric parts are equal, the qualifier is compared alphabetically. "RC1" and "SNAPSHOT" are sorted no differently to "a" and "b"; as a result, "SNAPSHOT" is considered newer because it sorts later alphabetically. See this page as a reference.
The issue on many projects is how to manage the versions without breaking the Maven format, which would cause Maven to treat your whole version as just the Qualifier, a plain string (not good). Note that a.b.c-RC1-SNAPSHOT would be considered older than a.b.c-RC1, because of the text comparison.
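If you want to check how Maven will order two particular version strings, you can ask Maven's own comparison class. This is just a hedged sketch: it assumes the maven-artifact library (which provides org.apache.maven.artifact.versioning.ComparableVersion) is on the classpath, and the version strings are examples only.

 import org.apache.maven.artifact.versioning.ComparableVersion;

 public class VersionOrderCheck {
     public static void main(String[] args) {
         ComparableVersion rc = new ComparableVersion("1.2.3-RC1");
         ComparableVersion rcSnapshot = new ComparableVersion("1.2.3-RC1-SNAPSHOT");
         ComparableVersion snapshot = new ComparableVersion("1.2.3-SNAPSHOT");

         // A negative result means the first version is considered older than the second
         System.out.println(rcSnapshot.compareTo(rc));  // RC1-SNAPSHOT sorts before RC1
         System.out.println(rc.compareTo(snapshot));    // shows how the RC1 and SNAPSHOT qualifiers order
     }
 }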

What to use as an Incremental version number?

I think it is reasonably straightforward for a project to determine the major and minor versions, as they often come directly from the business drivers for the project.
The incremental version can cause some issues, as the business may not be interested in it and it is the developers that need it for tracking purposes.
Therefore the incremental number has to be meaningful to them.
So useful numbers could be the database schema version, the sprint number, or the feature set that is being implemented (although this can be hard if you have several teams working in parallel).

The simplest way is to start at 1 and for the team leads to determine when to increase the number.

Whether to use a Qualifier or a BuildNumber?

There appear to be several schools of thought on this, and Maven simply fits with them all.
The Apache method is to have neither; using the SNAPSHOT qualifier allows you to follow this pattern.
However, SNAPSHOT does not allow your developers to know what to use the version for.
JBoss has qualifiers (alpha[n], beta[n], release candidate 'CR[n]' and Final) with optional numbers. [See].
The OSGi specification adds a further complication, as does the Eclipse numbering, which takes these forms:
<MajorVersion>.<MinorVersion>.<IncrementalVersion>.TIMESTAMP[-Mn]
<MajorVersion>.<MinorVersion>.<IncrementalVersion>.CR[n]
<MajorVersion>.<MinorVersion>.<IncrementalVersion>.Final
There are only two qualifier types before Final. The first is for the milestone releases, and that qualifier starts with a numeric timestamp. The project can use a timestamp of the form YYYYMMDD, as it will sort according to the compareTo method of the String class just like any other qualifier.
Optionally, if for some reason there is a need to make two releases in the same day, you can add a sequence number to the end of the timestamp. The next part of the qualifier is the milestone number, where M stands for milestone, and n is the milestone number.
After all the milestone releases that have added the various functional pieces are complete, and the project and any sub-projects that are integrated are at least at a candidate release stage, then a CR release will follow.  Just like in the traditional model, there may be multiple CR releases depending on the feedback from the community.

It is the above approach, minus the milestone number, that I would advise (assuming the 'IncrementalVersion' is controlled by the development team).

Setting up Maven

Making sure your version numbers are incremented can be a pain in the arse, but there is a plugin for Maven that helps you manage this.

The Maven 'Release' Plugin

The Release plugin [See] is helpful but not essential. It is used to help a developer release a project with Maven, saving a lot of repetitive, manual work. Its best use here is that it allows the developer to update their version number correctly and without effort.

It is added to the Maven project as follows:

 <project>  
     ...  
     <build>  
         <plugins>  
             ...  
             <plugin>  
                 <groupId>org.apache.maven.plugins</groupId>  
                 <artifactId>maven-release-plugin</artifactId>  
                 <version>2.5.2</version>  
             </plugin>  
             ...  
         </plugins>  
         ...  
     </build>  
     ...  
 </project>  

easy!

This allows the developer to issue a command as follows:
 mvn -B release:update-versions  
... and the version will be updated to the next increment. They then only need to commit it as part of their code.
It is always the last part of the version number that is incremented and it even works if you have a text qualifier such as CRn (see above).

Maven Versions Plugin

This plugin is much more useful when it comes to controlling your release via your CI server. [See]
(In these examples I'm going to quote Jenkins but this process should work for others.)
(I am also not sure whether Maven 3.1+ already includes this plugin.)

Unlike the previous plugin that increments the version, this will allow a specific version number to be set in the POM, like this:
 mvn versions:set -DnewVersion=0.1.1-RC1  

Where '0.1.1-RC1' is an example of a version number.

The process

Now to tie this together.
The process we want is:
  1. Jenkins checks out the latest revision from SCM (Subversion, Mercurial, Git, ...)
  2. Release Plugin transforms the POMs with the new version number
  3. Maven compiles the sources and runs the tests
  4. Release Plugin commits the new POMs into SCM
  5. Maven publishes the binaries into the Artifact Repository
Prerequisites: Jenkins with a JDK and Maven configured, and the Git, Workspace Cleanup and Parameterized Trigger plugins installed.
We're going to start by creating a new Maven job and making sure we have a fresh workspace for every build:


After assigning your SCM, the next step is to set the version upon checkout.
A good version number is both unique and chronological. We're going to use the Jenkins BUILD_NUMBER (the current build number, such as "153") as it fulfills both these criteria wonderfully.
We could instead use BUILD_ID, which looks like "2005-08-22_23-59-59" (YYYY-MM-DD_hh-mm-ss),
or even the Git commit number using GIT_REVISION.
This is configured as follows:
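The screenshots are missing here, but the essence is to substitute the Jenkins variable into the Versions plugin goal. A hedged sketch of the goal one might configure (the 1.0 prefix is just an example):

 mvn versions:set -DnewVersion=1.0.${BUILD_NUMBER}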



and in the build step:

And that's it! Every time this job is run, a new release is produced, the artifacts will be deployed and the source code will be tagged. The version of the release will be the BUILD_NUMBER (or GIT_REVISION) of the Jenkins project. Nice and simple.

Friday 26 June 2015

Worth a read: UX is not UI

I love UX and spend hours thinking about how to make a great UX (no really).
So when I read this article I just had to share.
UX has become a neologism. When something has “good UX” it is an implied meaning of having the core components of UX (research, maybe a persona, IA, interaction, interface, etc etc…). It’s not really necessary or desirable to tack the word design onto the end anymore. It’s a distraction and leads people down a parallel but misguided path… the path to thinking that UX = User Interface Design.
See: http://www.helloerik.com/ux-is-not-ui  and the graphic from the article here.

Tuesday 9 June 2015

Cool Tool of the Day: OutlookGoogleSync

A nice little tool to get your work outlook appointments into your Google calendar.

http://outlookgooglesync.codeplex.com/

It only pushes data from Outlook to Google but for those like me who only want an update on my phone it is more than enough.
 
According to the project it is...

A small tool to keep the Google calendar in sync with the Outlook calendar (one way: Outlook -> Google).
Doesn't need admin rights and works behind a proxy.
Works with Outlook 2003 and newer.

There is an alternative: http://calendarsyncplus.codeplex.com/

Thursday 4 June 2015

The lost world of java.nio

The O'Reilly book on the subject
I was asked today to help some colleagues with a comms issue on a telephony application.
As they asked, a whole set of questions and problems started to come to mind: what happens with the garbage collector, how fast is the code, does it mess up packets?

So I had a little look around for some information on Java NIO, it seems to have been kicking butt since Java 1.4 and we have all ignored it.

Indeed, in Java 7 it has a new friend … NIO.2.

I promised to do some research so here are my results:

Good bye commons-io. You’ve served me well, but I have a new ‘friend’!
In fact, the Channels utility class (http://docs.oracle.com/javase/7/docs/api/java/nio/channels/Channels.html) will allow you to convert to your familiar Stream classes while still sending data via the NIO classes to your socket (AsynchronousSocketChannel).
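As a hedged sketch of that idea (the host, port and message are placeholder values): open an AsynchronousSocketChannel and wrap it with Channels so the rest of the code can keep working with a plain OutputStream.

 import java.io.OutputStream;
 import java.net.InetSocketAddress;
 import java.nio.channels.AsynchronousSocketChannel;
 import java.nio.channels.Channels;
 import java.nio.charset.StandardCharsets;

 public class NioBridgeExample {
     public static void main(String[] args) throws Exception {
         try (AsynchronousSocketChannel channel = AsynchronousSocketChannel.open()) {
             // Wait for the asynchronous connect to complete
             channel.connect(new InetSocketAddress("localhost", 9000)).get();

             // Channels.newOutputStream bridges back to the familiar java.io world
             OutputStream out = Channels.newOutputStream(channel);
             out.write("hello from nio".getBytes(StandardCharsets.UTF_8));
             out.flush();
         }
     }
 }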

And if we really-really want to know how to do this stuff:


NIO.2 Cookbook - http://www.javaworld.com/article/2882984/core-java/nio2-cookbook-part-1.html

I may never use Java.IO again!

A good book : Software Measurement and Estimation: A Practical Approach

Estimating the size of a software development is a black art.
This book attempts to put some science onto the topic.

It attempts to give you methods for determining how big your project is compared to others, what is your code quality like and how efficient are your developers.

It contains many gems, such as the relative inefficiencies of coding in different languages. For example, VB has a "gear ratio" of 42 while Java and C# have a GR of 59, which means you must write roughly 40% (59/42 ≈ 1.40) more code to perform the same task in C# or Java. Which shouldn't come as a surprise to anyone.

It even suggests that once you get a couple of projects into a development you might be able to derive a ratio between your specification size (user stories) and the number of NLOC (non-commented lines of code). A woolly estimate, but perhaps a useful one if pushed for an estimate by a manager.

It goes on to argue that while LOC can be measured easily it is Functional Point Analysis (FPA) which will perhaps yield the best estimation results.
To me this is a better approach as it ties in with the Agile use of User Stories, which can be estimated according to their relative difficulty. However, in Scrum the developers tend to use planning poker, which is a method I am not personally in favor of as I prefer a more empirical approach and the use of historical data (see this post on using historical data).

The next stage this book suggests is converting your Function Points to LOC (which is useful if parts are using different tech) and comparing them to your historical results, i.e. your productivity.

The point here is DON'T GUESS!

Work it out!


"When you can measure what you are speaking about, and can express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind." - Lord Kelvin


"Science/It works bitches." - xkcd

Finally some tools to calculate LOC:
And some books:

Software Quality Metrics Overview

Implementing Automated Software Testing - Continuously Track Progress and Adjust Accordingly

A final thought on functional point estimation.

In my experience, the size and complexity of a project is a function of a quite limited number of parameters, which are comparable from one project to another, whatever the sector or domain area.

Each application usually has some specificity of its own (workflow complexity, external connection challenges, dynamic user interfaces, customized advanced business calculations or multi-tenancy for SaaS applications), but the impact that each such element has in terms of development cost is relatively constant from one project to another.

When determining the complexity of the project you combine several complexity factors; you may also find ratios that help you deal with this complexity, and you should be able to evaluate a project based on the following numbers:
  • number of entities
  • number of simple business rules
  • number of advanced business rules (cross-entity for example)
  • number of user interface elements (screens, web pages)
  • number of reports
  • number of external interfaces
  • number of batch calculations or processes
  • number of technology variations for components (rich-client, web page, mobile, database providers, cloud systems)
These numbers must then be combined with an uncertainty level to create a range for use with three-point estimation.

But more on that another day!



Estimation by stuffing things into boxes

I liked this article by Johannes Brodwall so much I thought I would steal it.
The original is here.

I’ve (sic: Johannes Brodwall) started using an approach for software project estimation that so far is proving to be fairly transparent, quick and reliable. I’ve observed that within a reasonable degree of variation, most teams seem to complete about one “user-relevant task” per developer per calendar week.
There are so many theoretical holes in my argument that there’s no point trying to cover them all. The funny thing is that it seems to work fairly well in practice. And to the degree that it’s dirty, it’s at least quick and dirty. The only thing I will address is that one of these “user relevant tasks” is smaller than a typical application feature.
Most importantly: Most teams never get it right on the first try. Or they spend too long gold-plating everything. Or both.
This article shows an example of estimating a fictive project: The Temporary Staffing System.

The high-level scope

Let’s say that our organization has come up with the following vision:
For a temporary employment agent who wants to match candidates
to client needs, the Temporary Staffing System is an
interactive web application, which lets them register and
match candidates and positions. Unlike competing systems
this lets us share selective information with our clients.
We come up with the following flow through the application:
  1. A new company wants to hire a skilled worker for a temporary position
  2. Administrative user adds the client details to the system
  3. Administrative user adds client logins to the system
    (perhaps we also should let the clients log in with LinkedIn etc?)
  4. Client logs into the application and completes new position
    description, including skill requirements
  5. Temp agency adds a worker to the system
  6. Temp agency proposes the worker to a position registered by a client
    (in the future, the worker may register themselves!)
  7. Client gets notified of new proposals (via email)
  8. Client views status of all open positions in the system
  9. External to the system: Client interviews candidate, request further
    information and makes a decision whether to hire or not
  10. Client accepts or rejects the worker in the system
  11. As worker performs work, they register their time in the system
  12. At the end of a billing period, the system generates billing information
    to accounting system
  13. At the end of a salary period, the system generates salary information
    to the accounting system
Some of these steps may be one user story, some may be many.

The top of the backlog

We choose some of the most central parts of the scope to create the beginning of the backlog. In order to accommodate for the learning as we go along, the first draft of our backlog may look like this:
  1. Experimental create open position
  2. Experimental list positions
  3. Simplified create open position
  4. Simplified list positions
  5. Complete create open positions
  6. Complete list positions
An “experimental” version of a story is a functionally trivial version that touches all parts of the technology. In the case of these two stories, perhaps we have the application leave the logged-in client as a hard-coded variable. The story may only include writing some of the fields of the positions, maybe only title and description.
The Simplified version may add more complex properties, such as skills from a skill list or it may add filters to the list.
The complete version should be something we’re prepared to put in front of real users.
By revisiting a feature like this, we have the chance to get the feedback to create a good feature without gold-plating.

Continuing the backlog

We add enough of the other stories to the backlog to cover an interesting part of the scope:
  • Basic create client account
  • Complete create client account
  • Basic login admin user
  • Basic login client user
  • Complete login client user
  • Basic add worker
  • Complete add worker
  • Basic propose worker for position
  • Complete propose worker for position
  • Complete confirm worker for position
  • Basic enter timesheet (in this version temp agency enters on behalf of worker)
  • Experimental billing report
  • Basic billing report
  • Basic salary report
This functionality should be enough to have a pilot release where some clients and workers can be supported by the new system. Or we may complete the backlog with complete versions of all functionality, worker login and perhaps a polished version of a feature or two.

Adding the non-functional tasks

There are some tasks that we want to plan some extra time for. I generally find that many of these tasks are tasks that customers understand quite well:
  • Attend training on CSS (the team is rusty in design skills)
  • Basic layout and styling of web pages
  • Complete layout and styling of web pages
  • Polished layout and styling of web pages (they want it really nice)
  • Locate slowest pages and make some performance improvements
  • Deploy solution to target platform
  • Deploy demo version to wider set of stakeholders
  • Deploy pilot version
  • Exploratory test of complete flow

Planning the project

In this example project, we have five team members plus a coach/project manager on half-time. Since our team will be working in pairs, we want to work on three functional areas per week. This way, we can avoid huge merge conflicts. The team agrees to plan for five stories per week, but only three the first week, because things generally go slower. Here is the top of the completed backlog:
  • Week 1: Experimental create open position
  • Week 1: Experimental list positions
  • Week 1: Attend training on CSS
  • Week 2: Simplified create open position
  • Week 2: Simplified list positions
  • Week 2: Basic create client account
  • Week 2: Basic layout and styling of web pages
  • Week 3: Basic login client user
  • Week 3: Deploy solution to target platform
  • Week 3: Basic add worker
  • Week 3: Basic propose worker for position
  • Week 3: Basic enter timesheet (temp agency enters on behalf of worker)
  • Week 4: Experimental salary report
  • Week 4: Complete layout and styling of web pages
  • Week 4: Complete create open positions
  • Week 4: Complete list positions
  • Week 4: Deploy demo version to wider set of stakeholders
  • Week 6: Exploratory test of complete flow
  • Week 7: Deploy pilot version

Presenting the plan

Working through the list gives us a complete timeframe of just over 6 weeks for full feature set for the pilot release. To cover realities of life, we probably want to plan for at least one week of slack or even more, depending on the strength of our commitment and the consequences of being wrong.
This gives a plan indicating 7 weeks times 5 people at 40 hours per week plus a 50% project manager at 20 hours per week or a total of 1540 hours.
I generally find that after a pilot release (or even before it), things change a lot. So I don’t invest much time into planning this.

Tracking the development

The true strength of a plan like this appears when you start running the project. Each week, the team will report on which stories they completed. This allows us to adjust the plan to actual progress.
On the flip side, the weekly planning comes down to the team and the customers agreeing on the definition of a story. The vagueness of “basic add worker” is by design! But the team should agree on what they mean by “experimental”, “simplified”, “basic”, “complete” and “polished”.

Conclusions

In this article, I have shown a quick and accurate way of coming up with a project forecast, complete with time and cost estimates. It’s easy to see and react to deviations from the forecast.
A few critical observations support this methodology:
  • I never believe a developer estimate other than “by the end of the day” or “by the end of the week”. (Don’t talk to me about hours!)
  • Estimating in hours is a silly way to get to project costs. Any hour-based estimate is always prodded and padded before magically turning into cost. Instead, estimate number of features, feature per week and cost by week.
  • Visiting a feature multiple times lowers total cost due to less gold-plating and less investment in poorly understood areas. It also improves the success of the final feature
  • The ambition of a feature (that is, how many times we will visit it) is a more reliable indication of cost than developer gut feeling
I’ve left many questions on the table, for example: What about architecture? What is meant by a “simplified” user story? How to deal with deviations from the forecast? Feel free to grill me for details in the comments to the article.
“So what will it cost?” Using this simple method to lay out your project forecast week by week, you can give a better answer next time someone asks.
Published at DZone with permission of Johannes Brodwall, author and DZone MVB. (source)

Code recipe: Using Maven to launch a Java Process

GET JAVA & MAVEN

apt-get install maven2
apt-get install default-jdk

GET YOUR PROJECT

git init <repo>
cd <repo>
git remote add -f origin <url>
git config core.sparseCheckout true

echo "some/dir/" >> .git/info/sparse-checkout
echo "another/sub/tree" >> .git/info/sparse-checkout
git pull origin master
mvn clean compile
mvn exec:java
(With the property "exec.mainClass" set in the project.)
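That property is the one the exec plugin's java goal reads; a hedged sketch of setting it in the POM, with a made-up class name:

 <properties>
     <exec.mainClass>com.example.Main</exec.mainClass>
 </properties>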


yes!



nohup mvn exec:java &


Thursday 21 May 2015

Cool tool of the day:

Why is my Eclipse so Slow?
Optimizer for Eclipse

As much as many devs will say that Eclipse is the best IDE, it still sucks when it comes to starting up.
Luna is an improvement over some of the earlier versions but it can still leave you waiting for it to start.

This is a topic I have visited before (Setting up Eclipse check list (Part 1), Eclipse Configuration settings), but now comes "Optimizer for Eclipse", a plugin by ZeroTurnaround (the people who brought us JRebel), so it has an impressive pedigree.

The plugin works by tuning the startup settings for Eclipse. Again, this can be done by hunting these settings down yourself, but it's nice to have them all in one place.
Once installed it prompts you to select a number of options, restarts Eclipse and tells you if you saved any time. It then disappears to sit under the Help menu.

On my first use of it I turned off class validation and found that Eclipse started 1 full minute faster.

So I'm pleased!

How to install

Either install using:
  • Marketplace
    Open Help → Eclipse Marketplace…
    Search for Optimizer for Eclipse.
    Press Install.
  • Update site
    Open Help → Install New Software…
    Add this repository.
    Complete the plugin installation.

Wednesday 8 April 2015

Cool tool of the day: jsonschema2pojo

So we all love JSON?

AND I love Java but creating the Java classes to serve or parse your JSON is a pain.

Especially when the JSON alters!

So use jsonschema2pojo!

It comes with a handy code generator Maven plugin.

The steps are as follows:

  1. Create a Java project with a Maven POM;
  2. Add the code generator plugin as seen in the documentation (a sketch is shown below);
  3. Capture your JSON;
  4. Create the JSON schema in the folder specified in your POM using jsonschema.net;
  5. Run mvn generate-sources;
Done!
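For step 2, a hedged sketch of what the generator plugin section of the POM might look like; the package name, schema directory and plugin version are assumptions, so check the jsonschema2pojo documentation for current values.

 <plugin>
     <groupId>org.jsonschema2pojo</groupId>
     <artifactId>jsonschema2pojo-maven-plugin</artifactId>
     <!-- version is illustrative only -->
     <version>0.4.10</version>
     <configuration>
         <sourceDirectory>${basedir}/src/main/resources/schema</sourceDirectory>
         <targetPackage>com.example.types</targetPackage>
     </configuration>
     <executions>
         <execution>
             <goals>
                 <goal>generate</goal>
             </goals>
         </execution>
     </executions>
 </plugin>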

Tuesday 31 March 2015

How to Tune Java Garbage Collection

How to Tune Java Garbage Collection

This is an excellent and very clear article on Java JVM tuning.

Think about the fundamental cause of GC tuning. The garbage collector clears objects created in Java. The number of objects that need to be cleared, as well as the number of GCs executed, depends on the number of objects that have been created. Therefore, to control the GC performed by your system, you should first decrease the number of objects created.
 The following table shows options related to memory size among the GC options that can affect performance.
Table 1: JVM Options to Be Checked for GC Tuning.
Classification   | Option             | Description
Heap area size   | -Xms               | Heap area size when starting JVM
Heap area size   | -Xmx               | Maximum heap area size
New area size    | -XX:NewRatio       | Ratio of New area and Old area
New area size    | -XX:NewSize        | New area size
New area size    | -XX:SurvivorRatio  | Ratio of Eden area and Survivor area
I frequently use the -Xms, -Xmx and -XX:NewRatio options for GC tuning. The -Xms and -Xmx options in particular are required. How you set the NewRatio option makes a significant difference to GC performance. Some people ask how to set the Perm area size: you can set it with the -XX:PermSize and -XX:MaxPermSize options, but only do so when an OutOfMemoryError occurs and the cause is the Perm area size.
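A hedged example of how those options might look on the command line; the sizes and ratio here are purely illustrative, not recommendations, and the jar name is made up:

 java -Xms2g -Xmx2g -XX:NewRatio=2 -XX:SurvivorRatio=8 -jar myapp.jar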


The full article can be found at: http://www.cubrid.org/blog/dev-platform/how-to-tune-java-garbage-collection/

Slides on HTTP authentication methods

HTTP authentication methods

This is a useful set of slides on HTTP authentication.

Click to view "HTTP authentication methods"

See: http://talks.codegram.com/http-authentication-methods

Monday 30 March 2015

Cool tool of the day: zapier

According to its website zapier does the following:
Unlock the Hidden Power of Your Apps
Zapier connects the web apps you use to easily move your data and automate tedious tasks.
I've used this tool to move data between PivotalTracker and JIRA and found that it was very, very easy to set up.
The cost of this service (https://zapier.com/app/pricing) runs from $0 to $125/month, which compared to development costs is epic.
The variety of services that can take information from one place to another is truly epic (https://zapier.com/zapbook/).
Considering this tool can take information from places such as Amazon SQS and Amazon RDS the scope for setting up easy process creation is huge!

I will return to this blog post at a later date.

But until then their promo vid ...

Friday 27 March 2015

Issue with Cordova 4.3.0 behind a corporate firewall & Proxy

While working with npm and Cordova I have encountered an issue with Cordova when running it behind our corporate firewall and proxy.
Initially we thought the issue was the one described here: http://wil.boayue.com/blog/2013/06/14/using-npm-behind-a-proxy/.
But we discovered that while npm could be fixed as described there, this didn't fix Cordova.
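For reference, the npm side of the fix is just a couple of config settings; this is a hedged sketch with placeholder proxy and repository URLs:

 npm config set proxy http://proxy.example.com:8080
 npm config set https-proxy http://proxy.example.com:8080
 npm config set registry http://nexus.example.com/content/groups/npm-all/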

We have configured npm to point at our Sonatype repository, but we discovered that Cordova does not pick up the 'registry' setting from npm, as it has the registry hard-coded in several places,
i.e. in lazy_load.js.

Line #148 is the original line which we have had to modify with line #149.


This issue manifests when we do a “cordova platform add ios” for example.

You can go through all the js files and replace all the lines where it occurs, but only lazy_load.js seemed to be the problem.

I would also update the 'version' entry in your Cordova package.json file as well, to ensure that you know you have patched your installation.

See: https://stackoverflow.com/questions/29306422/issue-with-cordova-4-3-0-behind-a-corporate-firewall



Wednesday 18 March 2015

Set up Node for Eclipse on Windows

A quick note on setting up The Node JS plugin for eclipse.
The plug-in is fine, but it is often better to have a client Eclipse and a server Eclipse at the same time, and these are easier to manage if they have different tool sets installed.

  1. Install ENIDE - actually Nodeclipse: http://www.nodeclipse.org/enide/
    Download and install in the same way as any usual Eclipse install (unzip to your selected folder).
  2. Install Node.js - download the Node installer:
    https://nodejs.org/download/
    Follow the instructions on the screens.
  3. Configure your Windows path - add an environment variable NODE_HOME set to the path you just installed to.
    Add the text ;%NODE_HOME% to your PATH environment variable.
  4. Configure the path to Node.js in Eclipse - in Node Eclipse (ENIDE), add the new Node installation under
    Preferences --> Nodeclipse
    in the "Node.js path" edit box.
And that is about it.

Tuesday 10 March 2015

Cool tip of the Day: Dark Theme, Top Eclipse Luna Feature

Configure the Dark option!
In a previous post I talked about setting up a more relaxing theme for Eclipse.
Well the LUNA release of Eclipse has gone one better and it now comes as standard.

To use the dark theme, go to Preferences -> General -> Appearance and choose ‘Dark’.

The theme extends to more than just the Widgets. Syntax highlighting has also been improved to take advantage of the new look. However I still like the themes from Eclipse Color Themes plugin as it allows a little more flexibility.

This is a great new additional feature.

Friday 20 February 2015

Spring Boot Rocks!

 Spring Boot ...
In case you have missed this wonder ...
Spring Boot makes it easy to create stand-alone, production-grade Spring based Applications that you can "just run". We take an opinionated view of the Spring platform and third-party libraries so you can get started with minimum fuss. Most Spring Boot applications need very little Spring configuration.
It is really a great tool and I can't recommend it enough.
When coding a micro-service or a single context application I can't think of an easier way of hosting your spring context.

Developers need tools and technology that allow them to get started quickly with the least amount of friction. They also demand modular, lightweight and opinionated technology to optimize productivity. Spring Boot takes aim at the very issue of getting up and running quickly while dramatically improving development velocity.

To get started with Spring Boot, you can add to a POM file the following settings:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.2.1.RELEASE</version>
</parent>
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
</dependencies>
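With that parent and starter in place, a complete runnable application can be as small as the sketch below; the class name, package-less layout and greeting text are just my own example, not from the Spring Boot docs.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@SpringBootApplication
public class DemoApplication {

    @RequestMapping("/")
    String home() {
        // One annotated class gives you an embedded web server and a working endpoint
        return "Hello, Spring Boot!";
    }

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}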

Or point your browser at Spring Initializr: http://start.spring.io. Spring Initializr provides a web-based interface allowing developers to select an application/workload and the relevant dependencies. It will then generate a starter application with build support (it supports Maven POM, Maven project, Gradle config and Gradle project).

Screenshot of Spring Initializr:



Thursday 22 January 2015

What's a data View? The Model View ViewModel



In an earlier post on an MVC variant called VESPA I discussed separating out the model and the controller into Store and Entities, and Actions and Presenter, respectively.

In a discussion of this approach the MVVM pattern came up, and for me it helped with a problem I had with VESPA.

But first a recap of MVC before talking about MVVM.

The basics of MVC

The usual MVC definition (of which VESPA is a sub-definition) divides the application into three kinds of components; the model–view–controller design defines the interactions between them.
  • A controller can send commands to the model to update the model's state (e.g., editing a document). It can also send commands to its associated view to change the view's presentation of the model (e.g., by scrolling through a document). 
  • A model notifies its associated views and controllers when there has been a change in its state. This notification allows the views to produce updated output, and the controllers to change the available set of commands. In some cases an MVC implementation might instead be "passive," so that other components must poll the model for updates rather than being notified. 
  • A view requests information from the model that it uses to generate an output representation to the user. 

It is the View in VESPA that I have been having an intellectual problem with, which is where I think MVVM helps.

The basics of MVVM

Broadly speaking, the model-view-viewmodel pattern attempts to gain both the separation of functional development provided by MVC and the advantage of abstracting data into a form that is recognisable to the end user.

The components of MVVM are as follows:



  • Model exactly as MVC 
  • View exactly as MVC, the code that actually constructs the display, and contains the commands that create the application. 
  • View Model this is the binding between View and Model when the Model does not reflect the data that the users wish to see. 

A criticism of the pattern comes from MVVM creator John Gossman himself, who points out that the overhead in implementing MVVM is "overkill" for simple UI operations. He also states that for larger applications, generalizing the ViewModel becomes more difficult. Moreover, he illustrates that data binding in very large applications can result in considerable memory consumption.

I would agree, and this has led me to discount MVVM until this point.

This is because there is often a 1-1 mapping between data on the View and data in the Model. This is certainly true of CRUD type screens.

The problem starts when users demand changes or the developers alter the model.

At this point the introduction of a view-model is needed.

My adaptation of MVVM to VESPA is to add a further subdivision to the View rather than the Model.

I believe that the model-views are part of the View as they share the dynamism of the view code and should not be bundled into any projects or build components that belong to the model.

A good view-model object should appear to the View developer as just another entity object.
How to define your Model-Views

The Store objects bind and map to the physical data storage, and the Entities map to the logical structure: the normalised data the application needs to hold.

The model-view maps to the concept: it abstracts away implementation details to focus on the entities, and their relationships and properties, as elicited in the problem domain. It is the ONLY part of the design suited to communication with stakeholders in general.

So how to do the model-views ... just ask the stakeholders what they want to see.

Then sit back and work out how to plug the entities into them.

The best part is you can mock the data in the model-view so the UX guys can work their magic but that is a different post.