Friday, 11 April 2014

Cool tip of the day: Random dummy images

The above is a random image; just refresh the page to see it change.
Sometimes you need filler content in your project, just as you would with Lorem Ipsum, but for images rather than text.

http://lorempixel.com/
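For example, a URL of the form http://lorempixel.com/400/200 should give you a random 400×200 placeholder, and if I remember the URL scheme correctly you can add a category on the end (e.g. http://lorempixel.com/400/200/sports) — check the site for the exact options.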

Love it!

Cool tip of the Day: Flat UI

Another nice CSS framework, this one for creating a flat look for your web site.

I've not used it yet, so I can't comment on the CSS, but it looks nice.

http://designmodo.github.io/Flat-UI/

Cool tip of the day: Heartbleed exploit

As you may have heard, there is a serious problem with OpenSSL (the Heartbleed bug, CVE-2014-0160).
But what to do about it?
The big players seem to be running around fixing things, but what do we minnows do?

I would suggest running this check first ... Heartbleed server test (http://filippo.io/Heartbleed/). If your server turns out to be affected (OpenSSL 1.0.1 through 1.0.1f), the fix is to upgrade to a patched OpenSSL (1.0.1g or later), restart the affected services, and then re-issue your certificates.

If you are not sure what this all means, I found this link helpful:

And in closing:

Wednesday, 9 April 2014

Cool tip of the Day: Granule

The load time of your web page often comes down to the number of JS and CSS files that the page includes.
There is a tension between developers and sysadmins: developers want to keep everything simple and in separate, editable files, while the sysadmins want a nice, simple and fast load time.
If you read the usual suggestions on optimising your site (see http://developer.yahoo.com/performance/rules.html), the advice is to keep things together.

80% of the end-user response time is spent on the front-end. Most of this time is tied up in downloading all the components in the page: images, stylesheets, scripts, Flash, etc. Reducing the number of components in turn reduces the number of HTTP requests required to render the page. This is the key to faster pages.
One way to reduce the number of components in the page is to simplify the page's design. But is there a way to build pages with richer content while also achieving fast response times? Here are some techniques for reducing the number of HTTP requests, while still supporting rich page designs.
Combined files are a way to reduce the number of HTTP requests by combining all scripts into a single script, and similarly combining all CSS into a single stylesheet. Combining files is more challenging when the scripts and stylesheets vary from page to page, but making this part of your release process improves response times.
So how to solve this ... for JSP/Java devs, use Granule (https://code.google.com/p/granule/).
To quote their site ...
Granule is an optimization solution for Java-based web applications (JSP, JSF, Grails). It combines and compresses JavaScript and CSS files into less granulated packages, increasing speed and saving bandwidth.
The granule solution includes:
  • JSP Tag library. You just need to put the tag around your stylesheets and JavaScripts to compress and combine them (see the example below).
  • Ant task, to include pre-compressing in your build scripts.
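Something like the following is roughly how the JSP tag gets used. Note that the taglib URI, tag name and file names here are from memory and purely illustrative, so check them against the Granule project page before copying:

    <%@ taglib uri="http://granule.com/tags" prefix="g" %>
    <g:compress>
        <link rel="stylesheet" type="text/css" href="css/reset.css"/>
        <link rel="stylesheet" type="text/css" href="css/site.css"/>
    </g:compress>
    <g:compress>
        <script type="text/javascript" src="js/jquery.js"></script>
        <script type="text/javascript" src="js/site.js"></script>
    </g:compress>

At render time the tag should combine and minify everything inside it into a single request, so the developers keep their separate source files and the sysadmins get their fast page load.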
Hey presto! Everyone is happy.




Friday, 7 March 2014

Code School

OK, I've finally tried to get to grips with Git ... and I have been shown a really nice learning resource.

Code School!


Please take a look, as it is quite fascinating ... http://try.github.io

Cool tip of the day: Come to the Dark Side! ... Eclipse theme colours

After some eye-dazzling hours working with the standard black-on-white editor in Eclipse, and witnessing the relatively calm appearance of Visual Studio's Dark theme, I decided to see if I could do the same with Eclipse.
I discovered that it is relatively straightforward and that lots of other people think the same way. I have included links to the plugins you need and a list of the other blogs and posts that will be helpful should you go over to the dark side!

  1. Download the ‘Dark Juno’ theme from https://github.com/rogerdudler/eclipse-ui-themes.
    This will change the colour of all Eclipse views and toolbars to a dark theme. However, we still have to change the colour theme of the editor.
  2. Get the Eclipse Color Themes plugin from http://www.eclipsecolorthemes.org/ and install a dark theme for the editor from there.
    I like the Retta theme, but it's your choice.
And that should be it!

The other references are:

Wednesday, 26 February 2014

Cool tool: IKVM.NET

[CAVEAT: NOT TRIED, JUST SOUNDS COOL!!!]

IKVM.NET is a JVM for the Microsoft .NET Framework and Mono. It can both dynamically run Java classes and can be used to convert Java jars into .NET assemblies. It also includes a port of the OpenJDK class libraries to .NET.

http://sourceforge.net/projects/ikvm/

Tuesday, 18 February 2014

Cool Tip of the day: Java regex tester

There is something of the dark arts about regex: even after years of using it, I just can't get it into my head.

So I was very pleased to find this nice little site:

http://java-regex-tester.appspot.com/

It gives you the option of defining the text and the regex in a very simple and immediate way.

A second and more in depth tester can be found at:

http://www.regexplanet.com/

This version allows you to build regexes for many languages, including Java.

So if you use it with http://txt2re.com/, which gives you the option of pasting your text into the site and picking the bits you want, you soon have your regex!
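Once one of these tools has helped you build a pattern, it drops straight into java.util.regex. A minimal sketch (the date-matching pattern and the sample text are just an illustration):

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class RegexDemo {
        public static void main(String[] args) {
            // Find ISO-style dates (yyyy-MM-dd) and pull out the year group.
            Pattern pattern = Pattern.compile("(\\d{4})-(\\d{2})-(\\d{2})");
            Matcher matcher = pattern.matcher("Released 2014-02-18, patched 2014-04-11.");
            while (matcher.find()) {
                System.out.println(matcher.group() + " (year = " + matcher.group(1) + ")");
            }
        }
    }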

Have fun!

Friday, 7 February 2014

Scalability

For the second time in two months I have been asked about "Big-O" functions when talking about software efficiency.
You may have seen people quote a metric for the efficiency of a piece of code as O(n) or O(n^2) etc., where 'n' is the amount of data you have.
But can you recall what it means ... have you seen it since your computing course?
Well I hadn't, and it is a long time since then (don't ask)!

From experience I know that a nested set of loops is going to be O(n*m), but this stuff really counts now that we are increasingly seeing huge data sets.

I have regularly seen processes recently that still take tens of minutes to run, and that is despite multiple fast processors.

So what is going on?

Well, let's look at an overview of the basics of Big-O notation (plagiarised from the PerlMonks):

Common Orders of Growth
O(1) is the no-growth curve. An O(1) algorithm's performance is conceptually independent of the size of the data set on which it operates. Array element access is O(1), if you ignore implementation details like virtual memory and page faults. Ignoring the data set entirely and returning undef is also O(1), though this is rarely useful.
O(N) says that the algorithm's performance is directly proportional to the size of the data set being processed. Scanning an array or linked list takes O(N) time. Probing an array is still O(N), even if statistically you only have to scan half the array to find a value. Because computer scientists are only interested in the shape of the growth curve at this level, when you see O(2N) or O(10 + 5N), someone is blending implementation details into the conceptual ones.
Depending on the algorithm used, searching a hash is O(N) in the worst case. Insertion is also O(N) in the worst case, but considerably more efficient in the general case.
O(N+M) is just a way of saying that two data sets are involved, and that their combined size determines performance.
O(N^2) says that the algorithm's performance is proportional to the square of the data set size. This happens when the algorithm processes each element of a set, and that processing requires another pass through the set. The infamous Bubble Sort is O(N^2).
O(N•M) indicates that two data sets are involved, and the processing of each element of one involves processing the second set. If the two set sizes are roughly equivalent, some people get sloppy and say O(N^2) instead. While technically incorrect, O(N^2) still conveys useful information.
"I've got this list of regular expressions, and I need to apply all of them to this chunk of text" is potentially O(N•M), depending on the regexes.
O(N^3) and beyond are what you would expect. Lots of inner loops.
O(2^N) means you have an algorithm with exponential time (or space, if someone says space) behavior. In the 2^N case, time or space doubles for each new element in the data set. There's also O(10^N), etc. In practice, you don't need to worry about scalability with exponential algorithms, since you can't scale very far unless you have a very big hardware budget.
O(log N) and O(N log N) might seem a bit scary, but they're really not. These generally mean that the algorithm deals with a data set that is iteratively partitioned, like a balanced binary tree. (Unbalanced binary trees are O(N^2) to build, and O(N) to probe.) Generally, but not always, log N implies log₂ N, which means, roughly, the number of times you can partition a set in half, then partition the halves, and so on, while still having non-empty sets. Think powers of 2, but worked backwards.
2^10 = 1024
log₂ 1024 = 10
The key thing to note is that log₂ N grows slowly. Doubling N has a relatively small effect. Logarithmic curves flatten out nicely. It takes O(log N) time to probe a balanced binary tree, but building the tree is more expensive. If you're going to be probing a data set a lot, it pays to take the hit on construction to get fast probe time.
Quite often, when an algorithm's growth rate is characterized by some mix of orders, the dominant order is shown, and the rest are dropped. O(N^2) might really mean O(N^2 + N).
 
This is quite nice: as you can see, it gives you an idea of the efficiency from just the SHAPE of the curve and not the absolute values.

Now those nested loops are understandable in context and can be expressed as an O(n^2) or O(n^3) curve.

But what if your program is using a set of code from the Java Collections package, for example?
You don't want to go digging through it just to get an idea of the efficiency.
Fortunately there is a cheat sheet that you can wander over to and pull out the efficiencies of those off-the-shelf functions you are using.

See: http://bigocheatsheet.com/.

For example, say you have a HashSet with 'n' elements that is compared against 'm' data items; what is the efficiency?

The worst-case search of a HashSet is O(n), and if you have to search it 'm' times you will have an efficiency of m * O(n), or ... O(m*n).

Easy!
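To see why the cheat sheet matters, here is a minimal sketch (the method and variable names are mine, purely for illustration). The first version scans a List for every probe and is O(n*m) no matter what; the second builds a HashSet once, so it is O(n + m) on average, but still O(n*m) in the pathological worst case, exactly as the cheat sheet says:

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public class LookupCost {

        // O(n * m): for each of the m probes we scan the whole list of n elements.
        static int countMatchesNaive(List<String> haystack, List<String> probes) {
            int hits = 0;
            for (String probe : probes) {        // m iterations
                if (haystack.contains(probe)) {  // O(n) scan per probe
                    hits++;
                }
            }
            return hits;
        }

        // O(n + m) on average: build the set once, then each contains() is ~O(1).
        static int countMatchesHashed(List<String> haystack, List<String> probes) {
            Set<String> index = new HashSet<>(haystack);  // O(n) to build
            int hits = 0;
            for (String probe : probes) {                 // m iterations
                if (index.contains(probe)) {              // O(1) average, O(n) worst case
                    hits++;
                }
            }
            return hits;
        }
    }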



So how do you improve your efficiency?

Apart from having a good look to see if you can improve your algorithm, it is now possible to get things done quicker by doing them in parallel.
If this is done by multiple machines it is known as "Scaling Out", or if it is done by increasing the power of your machine it is known as "Scaling Up" (also known respectively as Scaling Horizontally or Scaling Vertically).

But if you add twice the threads, does it get twice as fast?

If you read the article on WIKIPEDIA (http://en.wikipedia.org/wiki/Scalability), you will reach the part on Amdahl's law.

This gives a nice equation: S(P) = 1 / ((1 - α) + α / P)
where α (alpha) is the fraction of the process that can be done in parallel,
and P is the number of processors.

The equation has the following properties:
  • As α tends to ZERO ... so nothing can be done in parallel ... the function tends to 1.
  • As α tends to ONE ... so it can all be done in parallel ... the function tends to P.
In the first case no number of extra processors will help you ... while in the other it is scalable and adding more processors will increase efficiency.

The worked example on Wikipedia takes a case where 30% of the code cannot be done in parallel (so α = 0.7) and compares 4 vs 8 processors: doubling the processor count improves the speedup by only about 20%!
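If you want to check the arithmetic yourself, here is a quick sketch (assuming the 70% parallel / 30% serial split used in that example):

    public class Amdahl {

        // Amdahl's law: speedup S(P) = 1 / ((1 - alpha) + alpha / P),
        // where alpha is the fraction of the work that can run in parallel.
        static double speedup(double alpha, int processors) {
            return 1.0 / ((1.0 - alpha) + alpha / processors);
        }

        public static void main(String[] args) {
            double alpha = 0.7;  // 70% parallelisable, 30% stubbornly serial
            double s4 = speedup(alpha, 4);
            double s8 = speedup(alpha, 8);
            System.out.printf("4 procs: %.2fx, 8 procs: %.2fx, gain: %.0f%%%n",
                    s4, s8, (s8 / s4 - 1) * 100);
            // Prints roughly: 4 procs: 2.11x, 8 procs: 2.58x, gain: 23%
        }
    }

So doubling the hardware buys you barely a fifth more speed, which is exactly the point of the next paragraph.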

The conclusion to all this is that unless software developers improve their code efficiency, we are all just going to have to throw more 'tin' at the problem and get only a minor improvement for our money.



I just thought this was fun:

Vampirestat.com

Vampirestat domain and site appraisal!

This site is a full featured Website Statistics and domain appraisal service. You can easily check out your own website value in seconds. Our software collects information about the domains from various sources and makes it available to the users. Try and look up your domain right now.
 

Monday, 27 January 2014

Cool tip of the day: gource

I have the funny feeling that this is one of those tools that could be really useful if you could think of a sensible way to use it.
It produces an interesting visualisation of the changes occurring in your source control system.

It works with SVN & Git, but I have only seen it work so far with Git, and that was with a team of 3.
With that set up we were able to visualise the scope of work performed over the two week sprint as part of our sprint-retrospective.

Gource is a software version control visualization tool.
See more of Gource in action on the Videos page of its site.

Friday, 17 January 2014

Class package naming and a more pragmatic approach to MVC.

During a trawl of software development postings on LinkedIn (or somewhere) I stumbled across a blog post that set me thinking.
A typical collaboration of the MVC components
We have all become so familiar with the concept of Model-View-Controller (MVC) that I think as developers we don't give it any more thought, and when it doesn't really fit what we are doing we blithely carry on.
So after some reflection I thought I would write down some of my current thinking.
Microsoft tinkered with the pattern when they talked about MVVM and MVP, but they are basically just variants on MVC and have not been clearly adopted outside their software.

So, the alternatives, with my favourite last:

Business Process and Data with MVC (Model-View-Controller)

In her blog Lea Hayes talks about using MVC but adopts the concept of splitting the model part into a business process and data [See blog].
This is a nice idea as it shows how to consider a controller that actually does a task instead of just going directly to a view.

Unfortunately this approach does not absolutely resolve all the issues, as it relegates the controller to the role of "postman" or router. It also does not address what happens when the service has changed the data.
It also has no good moniker to remember it by ... MS-VC ... nope!
Lea Hayes included a link on her blog to a second concept...

'VESPA': A better MVC


This post on a further variant, from Bennett McElwee's blog, offers more promise to me: rather than redefining the paradigm as MVVM and MVP do, it adds definition to the elements of MVC. I agree with VESPA to a large extent, but I have just one redefinition of my own (see later).
The Original VESPA pattern
What VESPA does is refactor, rather than redefine, the MVC pattern.
The reasoning is that as a designer you can communicate the MVC pattern to your developers, but you may need to add some definition to how they start coding.
Essentially you break the M & C into four parts, effectively creating M = (S, E) and C = (A, P), which gives us SE-V-AP ... or, rearranged, VESPA, so you can communicate it.
But what are the parts?

The Model becomes "Store" and "Entity"

In MVC, data is still stored and used in a view, but there may not be a direct 1-1 mapping between stored data and the entities shown on the views (which is where MVVM comes in).
In VESPA your Store classes deal with persisting your data and contain any business logic on how to combine elements.
The Entity classes are those that are presented on the view but are not stored; they are, however, responsible for collating Store objects into a usable form.
An example of Store objects could be a set of JAXB objects talking to a web service, while the Entities are a set of POJOs used to place a degree of separation between the application and the data sources.

The controller becomes "Action" and "Presenter"

As in MVC, the controller is still in charge, but there are now two variants.
There is the Presenter, which is the simple "postman" controller I spoke of earlier: it just finds the data entities and passes them to the view.
The other type is the Action, which is called "Actor" in the original VESPA, but for me this conflicts with the Actor term from UML [See]. It is also similar to the "Service" Lea Hayes speaks about, but I think it is more of a controller concept.

My input to VESPA

I agree with the original concept of VESPA 95% of the way, and I am adopting it in my projects.
My differences from the original are:
  1. It's Action, not "Actor";
  2. The Action and Presenter only deal with the Entities.

Class packages and VESPA

Finally, being a bit anal about package naming, I would break the classes down into the following packages so you can find them (a minimal sketch follows the list).
Using a base package of mcnought.myproject you get:
  • Actions are in mcnought.myproject.controller.action;
  • Presenters are in mcnought.myproject.controller.presenter;
  • Stores are in mcnought.myproject.model.store;
  • Entities are in mcnought.myproject.model.entities;
  • Views are in mcnought.myproject.view.
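To make that concrete, here is a minimal, hypothetical sketch of the roles using that layout. The Customer* class and method names are purely illustrative (they are not taken from the linked posts), and the comments show which package each class would live in:

    // Hypothetical sketch of the VESPA roles; all names are illustrative only.

    // mcnought.myproject.model.store -- persisted data plus the rules for combining it
    class CustomerRecord {                      // e.g. a JAXB/JPA-style stored object
        String firstName;
        String lastName;
    }

    interface CustomerStore {
        CustomerRecord load(long id);
        void save(long id, CustomerRecord record);
    }

    // mcnought.myproject.model.entities -- the POJOs the view actually renders
    class CustomerEntity {
        final String displayName;
        CustomerEntity(CustomerRecord record) { // collates Store objects into a usable form
            this.displayName = record.firstName + " " + record.lastName;
        }
    }

    // mcnought.myproject.controller.presenter -- the "postman": find the entity, pass it on
    class CustomerPresenter {
        private final CustomerStore store;
        CustomerPresenter(CustomerStore store) { this.store = store; }

        CustomerEntity forView(long id) {
            return new CustomerEntity(store.load(id));
        }
    }

    // mcnought.myproject.controller.action -- an Action: does the work, then hands back an Entity
    class RenameCustomerAction {
        private final CustomerStore store;
        RenameCustomerAction(CustomerStore store) { this.store = store; }

        CustomerEntity rename(long id, String newLastName) {
            CustomerRecord record = store.load(id);
            record.lastName = newLastName;
            store.save(id, record);
            return new CustomerEntity(record);  // Actions and Presenters only deal in Entities
        }
    }

The view classes in mcnought.myproject.view then only ever see CustomerEntity objects, never the stored records.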
I hope this helps you.