Which technology, language to support this project:

Author
Discussion

mystomachehurts

Original Poster:

11,669 posts

252 months

Thursday 20th December 2007
quotequote all
Hi, I'm after a bit of advice, thoughts and opinions on how to address this opportunity.

I've been maintaining a legacy Visual J++ Application for the last ten years and I've been given the go ahead to write a proposal for a complete rewrite. woohoo

The time scales are short and I'd need to be delivering this by May.

Background

The Current Application (CA) sits in the centre of a network and allows a user to create Unit Tests (UT), string these together into a sequence to create a Test Schedule (TS), and string TSs together to create a Test Plan (TP).

The user selects a TP or TS, clicks run, and it goes off, runs all the tests and reports back on the results.

There are facilities to branch to a different UT based upon the result of a previous UT.

Currently the majority of the UT are run by connecting to a bespoke Remote Tester (RT) that actually performs the test. The interface to the RT is usually a Telnet or HTTP style interface.

A simple example would be a test that performs an FTP transfer, which could involve the following steps, where the Central Test Engine would:

1. Connect to the Server
2. Start the FTP Server on the Server Host
3. Connect to the Client
4. Start an FTP Client
5. Connect the Client to the FTP Server
6. Issue a Username and Password
7. Issue a Get Command
8. Wait for the Transfer to Fail or Complete (branch accordingly)
9. Record throughput etc.
10. Disconnect the Client from the Server
11. Stop the FTP Server
12. Disconnect from the Server Host
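The steps above might be sketched, very loosely, in Python. The `rt_send` callable stands in for the Remote Tester's Telnet/HTTP command interface, and all the command strings here are invented for illustration:

```python
# Hypothetical sketch of the FTP unit test above.  rt_send(cmd) stands in
# for the RT's Telnet/HTTP interface; the command strings are made up.

def run_ftp_unit_test(rt_send):
    log = []
    # Steps 1-7: set up server, client and the transfer itself
    for cmd in ["connect server-host", "start ftp-server",
                "connect client-host", "start ftp-client",
                "ftp connect", "ftp login user pass", "ftp get testfile"]:
        log.append(rt_send(cmd))
    # Step 8: block until the transfer fails or completes, branch accordingly
    result = rt_send("ftp wait-transfer")
    if result == "COMPLETE":
        log.append(rt_send("record throughput"))   # step 9, success path only
    # Steps 10-12: teardown happens on both branches
    for cmd in ["ftp disconnect", "stop ftp-server",
                "disconnect server-host"]:
        log.append(rt_send(cmd))
    return result == "COMPLETE", log
```

The branch at step 8 is the only conditional; everything else runs unconditionally, including teardown.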



Proposed Solution and Requirements

  • We will put all configuration parameters for UT, TS & TP into a DB - currently they are serialized Java Objects.
  • We require a Central Test Engine that can read a TS or TP from the Database.
  • The Test Engine must be able to run a UT, TS, TP and detect Passes, Failures and branch test flow accordingly.
  • The Test Engine must be able to run many UT in parallel starting and stopping them according to the behaviour of other UT
  • The Test Engine must be able to stop all UT in certain cases of failure
  • The Test Engine must be able to start certain UT when certain other UT complete
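A minimal sketch of what those engine requirements boil down to (illustrative names only, Python for brevity): run independent UTs in parallel, stop launching further UTs when one fails, and start a UT when its prerequisite completes.

```python
# Rough sketch of the engine requirements above; not a real implementation.
import concurrent.futures as cf

def run_schedule(units, depends_on):
    """units: {name: callable returning True/False};
    depends_on: {name: prerequisite name}."""
    results = {}
    ready = [n for n in units if n not in depends_on]   # no prerequisite
    with cf.ThreadPoolExecutor() as pool:
        while ready:
            futures = {pool.submit(units[n]): n for n in ready}
            ready = []
            for fut in cf.as_completed(futures):
                name = futures[fut]
                results[name] = fut.result()
                if not results[name]:
                    return results   # a failure: stop launching further UTs
                # start any UT whose prerequisite has just passed
                ready += [n for n, p in depends_on.items() if p == name]
    return results
```

Each round submits every runnable UT in parallel; completions feed the next round's ready list.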

Thoughts

  • Find an off the shelf Scriptable Test Engine that can be made to connect to our existing RT, issue commands and record responses. (We have our own Standard Interface for this, but the content varies depending on the test being run - i.e. if we are running a Real Time Protocol test we might record Jitter, if we run a ping test then we might only record Min, Max and Avg Response Times.) So some intelligence will be required at the Test Engine to accommodate this and the recording of results.
  • Our RT probably won't change as it contains bespoke protocol stacks that we can drive in unusual ways.
  • Python and Twisted-Python have been mentioned but I can't find anything on developing a Scriptable Test Engine.
  • I'd rather spend my time working on the DB as not only is it necessary to store all the tests and results in here, but we would like it to include a Requirements Tracking system such that we can link tests to Requirements and get some test coverage metrics.

Anyway over to you lot.

GreenV8S

30,269 posts

286 months

Thursday 20th December 2007
quotequote all
You don't say anything about the scale of the system, or what form the UI will take, so I'm making lots of assumptions.

Java serialisation is a disastrous solution for long term persistence of configuration data. Just don't, ever. Depending how complex the test schema is, storage in a relational DB might work but that could be quite limiting unless you go for an object/relational mapping (which begs the question why are you using an RDB?). Since you're clearly comfortable with Java syntax I'd suggest a slightly different architecture which I think you might like.

I suggest using a JScript engine to run the tests, and a (separate) HTML/JScript client to configure them. Obviously within JScript you can use your Telnet/HTTP protocols but you can also do a lot of stuff directly if you choose, and of course you have database, COM, WMI etc at your finger tips. I would store the tests themselves in text files using JSON encoding, or you could use XML if you prefer language neutral storage. You can store them in an RDB if you prefer but I don't see any significant benefit. A technique that has worked well for me is to provide two sets of JScript classes: one that implements a DHTML client to edit the script and one that executes the script. So evaluate the JSON within the client to render the editor, the editor emits the JSON representation for you to save somewhere, evaluate that within the runtime engine to instantiate the rule set that executes the script. You would need to decide whether you're going to implement the thin client as a standalone HTML application or a servlet, and I think that mainly comes down to centralisation/sharing of the scripts.
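The store/execute split described above can be shown in miniature (Python here rather than JScript, and the JSON schema is invented for illustration): one routine turns the stored JSON into editor fields, the other instantiates and runs the very same representation.

```python
# Illustrative sketch of the edit/execute split over one JSON test document.
import json

# An invented JSON schema for a one-step test
TEST_JSON = '{"name": "ping", "steps": [{"cmd": "ping", "host": "10.0.0.1"}]}'

def to_editor_fields(doc):
    # what the editor side would render as form fields
    return [(f"step {i} {k}", v)
            for i, step in enumerate(doc["steps"], 1)
            for k, v in step.items()]

def execute(doc, send):
    # what the runtime engine does with the very same representation
    return [send(step["cmd"], step["host"]) for step in doc["steps"]]

doc = json.loads(TEST_JSON)
```

The point is that the text file is the single source of truth; the editor and the engine are just two interpreters of it.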

mystomachehurts

Original Poster:

11,669 posts

252 months

Thursday 20th December 2007
quotequote all
GreenV8S said:
You don't say anything about the scale of the system, or what form the UI will take, so I'm making lots of assumptions.
True sorry, was in a bit of a rush.

Scale is not too big; there are probably fewer than 40 components in the entire system under test.

UI will likely be a web browser for two main reasons:
1. It means the system can be managed from any location
2. No distribution of executables after a system upgrade/modification

GreenV8S said:
Java serialisation is a disastrous solution for long term persistence of configuration data. Just don't, ever.
It worked very well for a long time, but it's not scaling very well now.

GreenV8S said:
Depending how complex the test schema is, storage in a relational DB might work but that could be quite limiting unless you go for an object/relational mapping (which begs the question why are you using an RDB?).
Mainly because, say we have a bunch of unit tests and the parameters that these tests need to run (usernames, passwords, IP addresses etc.), we can keep all the test parameters in one place, and then when we create a Unit Test we point it at a set of parameters to be used when the test is run.

Additionally we can audit any changes, so if someone creates, modifies or deletes anything with reference to a test or its parameters, we can stick a change note into the DB and track it.

Next, if someone wants to create a new test, we can present options for populating the parameters of that test, taken directly from allowable parameters mapped in the DB.


GreenV8S said:
Since you're clearly comfortable with Java syntax I'd suggest a slightly different architecture which I think you might like.

I suggest using a JScript engine to run the tests, and a (separate) HTML/JScript client to configure them. Obviously within JScript you can use your Telnet/HTTP protocols but you can also do a lot of stuff directly if you choose, and of course you have database, COM, WMI etc at your finger tips.

I would store the tests themselves in text files using JSON encoding, or you could use XML if you prefer language neutral storage.
You can store them in an RDB if you prefer but I don't see any significant benefit.
Text files (not dissimilar to serialised Java objects) make change control a nightmare; it also means newbies have to learn the file format to create & edit tests.

Previously we had pretty-clicky Java forms, supporting drag and drop and lots of rich GUI stuff for creating tests. Once a test was created, the Java data structure was serialized to disk - purely to remove the need to translate input into text and then translate text back into a Java data structure.

So a form on screen would know how to create a Java Test Object based upon input provided by the user, the Test Object knew how to save itself to disk, and the GUI could load an object and query it for information that needed to be presented on the form to the user.

When the test needed to be run, the user would select it, and the run-time engine would load the object and ask it to perform its task (essentially like calling public void Run). The run-time engine provides a standard set of interfaces for supporting some of the tasks that the test object might need to perform (logging, raising alarms etc).

Additionally the Test Object would provide a set of interfaces so that the run-time engine could perform actions on it (stop the test object, query its progress/state etc) - these were made available to the user via the run-time GUI.
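That Test Object contract might look something like this as a rough sketch (Python rather than the original VJ++, and the method names are illustrative): the engine calls run(), the object calls back into engine-provided services, and exposes stop/progress for the run-time GUI.

```python
# Illustrative sketch of the Test Object contract described above.
import abc

class EngineServices:
    """Standard interfaces the run-time engine offers a Test Object."""
    def log(self, msg):
        print(msg)
    def raise_alarm(self, msg):
        print("ALARM:", msg)

class TestObject(abc.ABC):
    def __init__(self):
        self._stopped = False
        self.progress = 0            # 0-100, polled by the run-time GUI

    @abc.abstractmethod
    def run(self, services) -> bool:
        """Perform the test; return True on pass."""

    def stop(self):
        self._stopped = True         # run() is expected to poll this flag
```

A concrete test would subclass TestObject, implement run(), and update progress as it goes.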

GreenV8S said:
A technique that has worked well for me is to provide two sets of JScript classes: one that implements a DHTML client to edit the script and one that executes the script. So evaluate the JSON within the client to render the editor, the editor emits the JSON representation for you to save somewhere, evaluate that within the runtime engine to instantiate the rule set that executes the script. You would need to decide whether you're going to implement the thin client as a standalone HTML application or a servlet, and I think that mainly comes down to centralisation/sharing of the scripts.
Previously I only gave a very brief example, let me explain a more concrete example.

We emulate and test satellite systems in the lab, these comprise

  • Real application clients
  • Software emulated data-capable satellite terminals
  • Software emulated satellite (ensures correct latency on packets on the network)
  • Real satellite base station
  • Real Authorisation, Authentication and Accounting functions
  • Real Application Servers
Each application client and software satellite terminal runs on its own host, and everything else is also on its own host.

We have a host for creating, managing and running the tests from.

A real world test scenario would be to test the outcome of 20 sat terms connecting through the satellite and running a 1 MByte FTP download. (This is possibly one of our simplest tests.) It would be executed as follows:

  • User selects the test and clicks run
  • The runtime engine is started and loads the test from disk
  • The test is started
  • The first unit-test is loaded - this UT needs to create the emulated satellite terminals. It needs to create 20 sat-terms, but the parameters for each one are slightly different - they use IMSIs just like mobile phones, and they need to reside on different hosts, so we need different IP addresses to connect to.
  • The UT is passed a set of parameters for each sat-term we need.
  • The runtime engine is instructed to run this initialisation in parallel
  • Let's assume it works and the test proceeds.
  • Next we get the terms to activate a PDP Context
  • So we start the next UT - but only because the first UT has passed (worked)
  • The runtime loads the next UT in the Test Schedule, which knows how to activate PDP Contexts - we pass it the parameters for the 20 sat-terms we have just created.
  • It begins to execute the PDP Context Activations at the remote sat-term hosts.
However we know that there is a bug in the system that means we can only activate 10 at a time - so we set a concurrency limit on this task to 10. As each UT succeeds we start the next one. (Later when the bug is fixed we will allow it to activate all 20 in parallel - and we will mark this change in the Audit Log in the DB and tie the change to a Bug report in the DB)
  • So eventually we have 20 sat-terms connected and we can begin to try our FTP.
Interestingly, what we want to know is: if 15 sat-terms are running heavy FTP downloads, what happens if another 5 start up?

  • So we get the run-time engine to load up the next UT - the FTP download - this UT is only aimed at the first 15 sat-term clients.
  • We configure it to run the first 15 in parallel
  • The UTs (they are intelligent, don't forget) connect to the hosts running the sat-terms and start up an FTP client
  • They all start and begin downloading - each notifying the run-time engine that they have started
  • The test engine notes they have started successfully and launches the next five FTP downloads in parallel
Let's consider a failure now - one of the last five FTPs doesn't work - we get a fail.

No point in allowing this to continue - we proceed with the following:

  • Log the failure
  • Stop all running FTP downloads
  • Get all the sat-terms to disconnect ready for the next test (logging more errors and branching accordingly if necessary)
  • Run a whole heap of other UT that will check the network is in a state where we can run the next set of tests.
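The concurrency-limited activation and stop-on-failure behaviour in the walkthrough above could be sketched like this (Python, illustrative names only; the limit of 10 mirrors the bug workaround, to be raised back to 20 once fixed):

```python
# Sketch of concurrency-limited PDP activation with stop-on-failure.
import concurrent.futures as cf
import threading

def activate_all(terms, activate, limit=10):
    gate = threading.Semaphore(limit)
    failed = threading.Event()

    def one(term):
        with gate:                   # at most `limit` activations in flight
            if failed.is_set():
                return None          # an earlier activation failed: abandon
            ok = activate(term)
            if not ok:
                failed.set()         # signal the remaining activations to stop
            return ok

    with cf.ThreadPoolExecutor(max_workers=limit) as pool:
        return dict(zip(terms, pool.map(one, terms)))
```

Raising the limit when the bug is fixed is then a one-parameter change, which is exactly the kind of edit the DB audit log would tie to the bug report.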
As said, this is about the simplest scenario we have been working with. Some of them are monumentally complex.

So, I want a test engine framework that will allow us to proceed in a fashion where we are addressing:

  • Requirement coverage of the tests
  • Change Management of entities within the system - e.g. new sat-term software emulator
  • One click - run all tests

The problems that the boys have got at the moment are mainly change and requirement management; the problem I have is an over-developed/bloated/hacked test script engine that needs replacing. Our requirements are along the lines of the above.

As you can probably tell, we already have quite a powerful and intelligent script-engine/test editor, but it's all in VJ++ and I want to move it elsewhere.

I've been looking at the Eclipse Test and Performance Tools Platform, which looks very promising.

Anyway Peter!
Many thanks for taking the time to write your post.
I have to nip out now, but look forward to later seeing what others think.

best
Ade




ETA a bit of clarification.



Edited by mystomachehurts on Thursday 20th December 17:35

GreenV8S

30,269 posts

286 months

Thursday 20th December 2007
quotequote all
Just to respond briefly to a couple of points you raised:

mystomachehurts said:
It worked very well for a long time, but it's not scaling very well now.
The problem is not scalability, it's the fact that there's b*gger all support for versioning and you can end up wasting a lot of time adding (and then testing/fixing) support to make your new classes backwards compatible with the older serialised object streams. To add insult to injury the encoded representation is extremely difficult to deal with except by reading it back into Java so you have effectively no diagnostic access to see what's wrong and repair it.

mystomachehurts said:
Text files (not dissimilar to serialised Java objects) make change control a nightmare; it also means newbies have to learn the file format to create & edit tests.
I didn't mean that the user ever deals with the textual representation directly. The GUI would enable them to construct the script interactively and configure parameters etc. The XML/JSON/whatever representation is just the format used to store it. All of these formats have the advantage of being human-readable (and human-fixable, if necessary) via any handy text editor but no user would normally deal directly with the textual representation. If change control was significant to you, these formats are all handy for whatever text based source control system you care to use.

mystomachehurts

Original Poster:

11,669 posts

252 months

Thursday 20th December 2007
quotequote all
GreenV8S said:
Just to respond briefly to a couple of points you raised:

mystomachehurts said:
It worked very well for a long time, but it's not scaling very well now.
The problem is not scalability, it's the fact that there's b*gger all support for versioning and you can end up wasting a lot of time adding (and then testing/fixing) support to make your new classes backwards compatible with the older serialised object streams. To add insult to injury the encoded representation is extremely difficult to deal with except by reading it back into Java so you have effectively no diagnostic access to see what's wrong and repair it.
Sorry, misunderstood you - I completely agree, we encountered that problem a few times and there was some very ugly code involved to fix it.

GreenV8S said:
mystomachehurts said:
Text files (not dissimilar to serialised Java objects) make change control a nightmare; it also means newbies have to learn the file format to create & edit tests.
I didn't mean that the user ever deals with the textual representation directly. The GUI would enable them to construct the script interactively and configure parameters etc. The XML/JSON/whatever representation is just the format used to store it. All of these formats have the advantage of being human-readable (and human-fixable, if necessary) via any handy text editor but no user would normally deal directly with the textual representation. If change control was significant to you, these formats are all handy for whatever text based source control system you care to use.
Gotcha!

Thanks for the reply!

cyberface

12,214 posts

259 months

Thursday 20th December 2007
quotequote all
Ruby, Rails, MySQL and XML hehe

mystomachehurts

Original Poster:

11,669 posts

252 months

Thursday 20th December 2007
quotequote all
cyberface said:
Ruby, Rails, MySQL and XML hehe
rofl

How's it going fella? I've been meaning to drop you an email for a while now. You know how it is. Hope all is well with you and yours.

cyberface

12,214 posts

259 months

Thursday 20th December 2007
quotequote all
mystomachehurts said:
cyberface said:
Ruby, Rails, MySQL and XML hehe
rofl

How's it going fella? I've been meaning to drop you an email for a while now. You know how it is. Hope all is well with you and yours.
Hanging in there mate. Having to put up with a client who's a Microsoft shop through and through wink but doing fixed income attribution so career is 100% back on the rails. Shattered but enjoying it.

Hope you're doing well - sounds like you've pulled another project out of the bag so good luck with it! I'd seriously look at the scripting languages (I'd obviously recommend Ruby, but Python is just as good) and their associated web frameworks if you're on crazily aggressive time schedules and you don't have a large team of developers.

You can get a lot more done with fewer lines of code with the modern scripting languages and less code == fewer bugs, and is quicker to write as well.