Father Mode <ON>: The Double Life of my Tablet

This week’s guest blogger is Fabrizio Volpe, an experienced network architect with a focus on security and unified communications. He is a four-time Microsoft MVP and the author of Getting Started With FortiGate and Getting Started with Microsoft Lync Server 2013. A free Lync e-book he published, Microsoft Lync Server 2013: Basic Administration, has been downloaded more than 2,800 times in less than four months. He is also an avid blogger. Contact us if you would like to be one of our guest bloggers.

While my laptop is (still) a device dedicated to my work, over the last couple of years my tablets have lived what we could call a “double life”. I have a child of about four years and, although I always prefer to take him for a walk in the park or play with him with more “classic” toys, games on a tablet are a great help when I need a few minutes of peace and quiet. The very wide range of educational games available also mitigates my guilt: the time spent with those games is not totally useless for my child. On the other hand, the multimedia capabilities and the flexibility of this kind of device have earned it a place in my spare time too, with needs completely different from those of my son. This daily change of use for the same device led me to reflect on the current tablet market and its operating systems, and to revisit some “paradigms” that I took as given a few years ago. Let’s start by profiling my expectations in the two different scenarios.

Father Mode <ON>: my Child is the King

Managing a child with a tablet stresses some aspects of the operating system that are less noticeable when the user is an adult. In my case, what I want is:

  • a wide range of educational games (if they are free, better)
  • an operating system that is as difficult as possible to compromise
  • capability to quickly isolate the device from the Internet and from external devices
  • apps (games) that are easy to install and uninstall, leaving little or no footprint on the device

Father Mode <OFF>: I am the King (With the Permission of my Wife)

Some activities (working on documents and texts, accessing certain types of information) remain limited to the world of the laptop/desktop. A possible exception would be a Surface Pro tablet but, borrowing the definition of a friend of mine, it is like “a laptop with an interesting form factor”. Like a laptop, it is a device I would not give to my son. So what do I expect from my slice of tablet time?

  • easy access to e-mail and social networks
  • high quality multimedia capabilities, both for online files and for local contents
  • e-books reading
  • a few apps related to everyday life like password management, calendar and so on

We Are All One Big Family

The lists drawn up a few lines ago are certainly very personal. Nevertheless, I am sure the scenario is the same in most households, perhaps with multiple devices at the disposal of the members of the family. The first thought that comes to me is that a paradigm that held true until a few years ago has now completely changed: Windows is no longer the most user-friendly operating system, at least in the world of tablets. Windows 8 in its “full” version has a level of complexity that its two main competitors (Android and iOS) do not. While there is a large number of “standard” programs available, the Marketplace that should supply apps optimized for a tablet experience continues to be, in my opinion, an Achilles’ heel, with a limited choice and the best software almost always requiring a fee.

Windows RT: a Missed Chance

The latest news regarding the other Microsoft operating system dedicated to tablets, RT, suggests that it could be facing a short life expectancy. Although the fragmentation across many operating systems (including various versions of Windows Phone) has certainly not helped establish Microsoft in this field, RT could have been a viable alternative to the competition. It is more “closed” but also more “robust”, and that is what I need, especially when I am in “father mode”. In my opinion, killing this version of Windows is a big missed chance. I also have to say that these sudden turns and abrupt changes of mind regarding the operating systems, with scenarios changing over a few years (or months), will not motivate programmers to grow the supply of apps, making the Marketplace problems I have already mentioned even more serious. All of this also eliminates another old paradigm: fragmentation of versions is no longer exclusive to Linux.

And Then There Were Two

The two remaining systems (iOS and Android) are both able to cover all the needs I have. Although they are distinguished by cost and by choices based on personal taste, they are definitely suitable for what is required of a device (the tablet) that is now part of the everyday life of many families. One thing that impressed me very much is that the app “version” of some software often performs better than the PC version (not to mention the many apps that simply do not exist outside the two operating systems just mentioned). In my list of faded paradigms, even this one carries weight. Or am I the only one to remember that richness and choice of programs was one of the success drivers of an operating system?

Show your knowledge by going to IT Central Station and sharing your experiences with other IT decision makers. Read reviews and post your own reviews of solutions you have experience working with.

Big Data and the Madness of Crowds

This week’s featured reviewer is Martin Butler. Martin is best known as the founder of the Butler Group which was Europe’s largest indigenous IT analyst firm until its acquisition by Datamonitor in 2005.  He is also the founder of Butler Analytics and an expert reviewer on IT Central Station. Contact us if you would like to be one of our guest bloggers.

“We find that whole communities suddenly fix their minds upon one object, and go mad in its pursuit; that millions of people become simultaneously impressed with one delusion, and run after it, till their attention is caught by some new folly more captivating than the first.” (Charles Mackay, Extraordinary Popular Delusions and the Madness of Crowds)

That the IT industry is primarily driven by fashions is fairly obvious. Even MIT Sloan Management Review published an article about the career benefits of becoming a dedicated follower of IT fashion. Big data is the latest IT fad to get the fashionistas drooling, and as with all IT fashions, some organizations will look much better and some much worse for adorning themselves with this latest garment. But at a time of crowd madness such subtleties become largely ignored.

Just to level the playing field, here is a quick overview of what big data is about. Commentators are forever telling us that the quantity, diversity, velocity and volatility of data are rapidly increasing – yawn. Yes, we all know this, and some bright spark contorted the issue sufficiently that five words beginning with ‘V’ could be used to explain this phenomenon. But let’s not go there.

The traditional relational database does a lot of work making sure things hang together (not being able to delete a customer record while there are still open orders referring to that customer, for example). Various types of integrity are maintained, and because databases have traditionally been used for transactional data, we store the details of each transaction as a single unit. For very large amounts of data this is not a good scenario. Scalability is limited by the fact that some central entity has to make the whole thing hang together, and for analytics work the row-based relational model is fairly useless. So it has gradually dawned on people that stripping away the overhead of the relational model and doing away with the row-based paradigm might be a good idea. The result is big data technology, which is characterized by massive scalability and great flexibility.

The central construct used in big data is the key-value pair. Instead of storing a transaction as a single record, it is broken down into multiple key-value pairs. So if customer Joe Smith has a key of 101234, then we might see several key-value pairs: 101234:Joe, 101234:Smith, 101234:50, 101234:New York, etc. Using the common key, the details for Joe can be reconstituted if needed, although this really isn’t a particularly efficient thing to do. But if we wanted to total the sales for the current month, then we just need to rip through the ‘purchased this month’ key-value pairs and total them. These key-value pairs can be distributed over multiple servers with a minimum of centralized control (a job that Hadoop performs).
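To make the key-value idea concrete, here is a minimal Python sketch of the Joe Smith example. The customer keys and field names are invented for illustration, and a real store would distribute the pairs across servers rather than hold them in one list:

```python
# Each fact is stored as an independent (key, field, value) triple rather
# than as one row per transaction. Keys and field names are invented.
pairs = [
    (101234, "first_name", "Joe"),
    (101234, "last_name", "Smith"),
    (101234, "purchased_this_month", 50),
    (101234, "city", "New York"),
    (105678, "first_name", "Ann"),
    (105678, "purchased_this_month", 120),
]

def reconstitute(key):
    """Rebuild one customer's record by scanning for the common key --
    possible, but not particularly efficient, as noted above."""
    return {field: value for k, field, value in pairs if k == key}

def total(field_name):
    """Totalling one attribute only touches the matching pairs -- the cheap,
    easily parallelised case (the kind of job Hadoop distributes)."""
    return sum(value for _, field, value in pairs if field == field_name)

print(reconstitute(101234))
print(total("purchased_this_month"))  # 170
```

Reconstituting a record scans every pair, while totalling one attribute ignores everything else, which is exactly the trade-off the relational model makes in the opposite direction.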
As with all things, these benefits do not come for free. All the tying together that relational databases performed now has to be implemented in program code, and complexity mushrooms. In fact, big data is so new that we don’t really know how damaging this complexity will be. It reminds me very much of the early days of client/server computing – which was also about distributing stuff that had once been centrally controlled on a mainframe. Disaster stories became the order of the day as system management issues reared their ugly head.

The rational approach to big data, and any other new IT fashion is as follows:

  • If you really, really need big data then by all means go for it. But be aware that this is a high risk route and should be balanced by a solid conviction that the benefits will be higher.
  • If you need big data, but not tomorrow, then by all means prototype. Take your time and let others make the mistakes. This also allows the skills market to mature and the price for such skills to fall. Then do it when prices are low and lessons have been learned.
  • Stick with what you have got if there is no need for big data, and five to ten years down the track when you possibly might need it the technology will be mature, skills less expensive and much smaller risks will be involved.

Please note the use of the word ‘rational’. But generally speaking we are not rational, and most certainly not where IT is concerned. Personal career agendas, emotions, cognitive biases and so on make us anything but rational, despite the pretense. So here is what will happen. Big data will be very ‘big’. A few years from now we will start to see disaster stories emerging, although in reality these will only be the tip of the iceberg, since most IT cockups are hidden from public view. Social networks just make the whole scenario much more likely as consultants, managers and technicians jostle for position.

This is not an anti-big data article. Big data is here to stay, although in typical fashion we overestimate the short term effects and underestimate the long term effects – which will be profound. But that is another story.

Read reviews of Business Intelligence Tools from real users at IT Central Station. See reviews of Microsoft, Tableau, and other BI vendors.


How to Successfully Manage BI Dashboard Projects

This week’s guest post is by Fernando Bustillo. He is a Business Intelligence and Data Warehouse expert and has experience working with SAP Business Objects, Oracle, Teradata and other enterprise solutions.  Contact us if you would like to be one of our guest bloggers.


The fact that many dashboards seem to be a single visual screen, or eye-candy screen, could make one believe that they are easy to manage. This could be the first reason why your project will be unsuccessful: underestimating “the enemy”. In my broad experience working with BI (Business Intelligence) projects, most of them related to dashboards, I have come to appreciate the difficulty of building a dashboard that is accurate, useful, functional, quick to respond and easy on the eye. Dashboards are used as a tool by the managers and executives of the organization, and need to be adapted to their requirements and tastes. In this post, I’m sharing the major points to consider when managing a successful dashboard project.

What is a dashboard?

There are several definitions of a dashboard. Stephen Few gave us a good one in the March 2004 article titled “Dashboard Confusion”, which appeared in Intelligent Enterprise magazine:

A dashboard is a visual display of the most important information needed to achieve one or more objectives; consolidated and arranged on a single screen so the information can be monitored at a glance.

Typical examples are the simple dashboard of a car and the more complicated one of an aircraft. In the case of an organization or a company, we as BI designers have to be open-minded, because the requirements of a dashboard project normally include:

  • Multiple dashboards, each with its own objective
  • Drill-down ability to analyze information
  • Links to detailed reports
  • Additional functionality: navigation, what-if analysis, ability to share information, internal communication, export data, print, send by e-mail, etc.
  • Metadata information

Car dashboard:

Sales dashboard:


Dashboard success factors you have to know

Depending on the dashboard type, you have to pay special attention to different factors. Strategic dashboards are normally used for monitoring the company’s overall progress in achieving predefined goals. In this type of dashboard, users are at the top of the hierarchy, so the quality of data is very important. In addition to consolidating the information, it is important to ensure the information is correct and accurate. Nobody wants the CEO consulting a dashboard with incomplete or erroneous information. The information must be complete and valid before it is visualized on the dashboard.

Tactical dashboards are usually used for tracking trends in relation to the company’s goals and initiatives. This type of dashboard normally incorporates three types of data: detailed, summarized and historical. Users normally navigate from the first visual screen into the OLAP (On-Line Analytical Processing) system to analyze the information and to review detailed reports. This explains the need for a deeply functional system with a quick response time, which is a challenge if the amount of data is large.

Operational dashboards are used for monitoring and analyzing a company’s most detailed activities in a given department. Normally this involves real-time or near-real-time data, so decisions about the ETL (Extract, Transform, Load) process are important. Most operational software includes its own dashboard modules. Using such a module bypasses the problem of load time, but normally with a loss of functionality. The difference in project development time is very large.

If there isn’t an indicator dictionary, you have to create one. Establishing a precise and accurate definition of metrics and indicators will facilitate understanding of the dashboard and further its adoption. End users have to understand the meaning of the dashboard if they are going to utilize it. The indicator definitions also have to be accessible from the dashboard.

One needs to ask: is all the data available? Very often, strategic data are not saved in any corporate database; instead they are saved in personal documents like spreadsheets and presentations. This makes the process of loading the data into the dashboard difficult.

The dashboard needs to be integrated into the organization, so it is important to consider corporate identity: logos, colors, fonts, menus, etc. Are there similar systems? If so, use a similar look and feel.

What’s better, a very precise classic table with the exact data or the newest graphs full of colors and shapes?


The world of data visualization is changing rapidly, with a trend toward complex graphs and infographic techniques that visualize information in an impactful way. The concept of big data is also changing our relationship with the world of information. That’s perfect for a powerful presentation, but not always for a dashboard. For each dashboard we have to choose the graphic elements that best represent the business event we need to monitor. Speedometers, gauges and meters are a current trend, but they use a lot of space to represent a single indicator. Pie charts are good for comparison, but they are not precise. Tables are boring, but accurate. Bar and line charts are classic, but functional. A combination of areas, bars and lines on a chart is often a good choice.

The dashboard is not a static screen, since the user must interact with it. Not only print and export, but also select and filter data. Using charts as a filter should be very intuitive. Users expect tables to have drill-down functionality. The navigation has to be clear and intuitive. At any time, users need to know the level of information that they are consulting.

Manage expectations. Because dashboard tools offer more and more built-in functionality, it is common to see users waiting for the latest function they have heard of, or for drill-down ability that you have not developed. It is important to show examples of other dashboards developed with the same tool. Building prototypes is one of the best ways to manage expectations. It has to be clear how users will interact with the dashboard.

Don’t wait until the end of the project to show the dashboard. Designing a good dashboard is not easy, no matter the experience you have. It is not possible to get it right the first time, so you have to build prototypes and quick developments to validate it with end users. They have to validate navigations, graphs, colors, fonts, data and all the important functions.

Dashboard development has the potential to never end! This could be a great business opportunity for consulting companies, but a headache for the project manager. When end users like the dashboard, they often want changes that incorporate more information and functionality. This is why it is important to have a project with a limited scope.

As the world changes, business changes. Dashboards have to change according to business needs. Don’t forget to manage a maintenance agreement to guarantee that the dashboard will evolve according to new requirements.


A dashboard project could be an easy project to manage with few resources in a short time, or a big project that involves multiple resources with different skills: data visualization, business knowledge, database experts, technicians, consultants and managers. We need a good project definition and a limited scope to make a realistic plan. To avoid falling short of users’ expectations, build prototypes and rapid developments to show preliminary dashboards to the end user. If the project is large, separate it into phases for short-term results.



At Last B2B is Getting Some of the Social Media Action!

This week’s guest post is by Marie Wallace. She blogs at allthingsanalytics.com and you can follow her on Twitter at @marie_wallace. Contact us if you would like to be one of our guest bloggers.

About 18 months ago I wrote a blog post entitled “Wake up Enterprise, the Internet is kicking our ass!” where I was bemoaning the lack of progress companies were making in really leveraging social networking within the enterprise; specifically when it comes to applying analytics on these networks in order to better inform business decisions. Today I’m glad to see that the focus has started to shift and at last we are starting to see a wide range of social solutions which are firmly targeted at the enterprise.

Crowd-sourcing of enterprise product reviews is just one example of this shift with specialized social networks like IT Central Station leveraging community and crowdsourcing to completely transform how companies make product licensing decisions. Making an enterprise decision is a completely different proposition to that of buying a consumer product. Frequently millions of dollars can be at stake since product decisions are not just about the purchasing or licensing costs; these decisions can impact business processes, organizational efficiency, legal, compliance, security, risk, finance, reporting, customer or employee sentiment, etc. So when companies look to get recommendations on enterprise products and services they need to ensure that these recommendations are based on accurate, reliable, and contextually relevant reviews from a review site that they trust.

Companies also need much more granular feedback about a product from many different perspectives in order to accurately align their decisions to the needs of their business; these needs may be characterized by their type of business (retail vs. financial), location (European legislation vs. US), size (SMB vs. multi-national), organizational culture, business processes, products, industry, etc. As crowdsourcing captures more characteristics of the products being reviewed, the people doing the reviewing, and the companies they come from, the knowledge graph becomes richer as does the type of analytics that you can apply.

Now I know you are probably asking yourself, “How the heck is Marie going to bring this back around to social analysis?” I know I tend to sound like a broken record, but I firmly believe that, to borrow a variation on James Carville’s 1992 Clinton campaign slogan, “It’s all about the people, stupid”. Unlike consumer product reviews, you cannot consider a review in isolation from the person who gave it. The person may have a close affiliation with the product in question, casting doubt on their review; they may be a competitor, which means a negative review has to be taken with a pinch of salt; or they may come from an organization with a very different set of business objectives. For this reason the network (more of a knowledge graph than a pure social graph) is critical in order to capture all these connections and allow you to apply the appropriate analysis.

So we can all agree that individual reviews are totally inadequate and that wading through masses of reviews is painfully time-consuming; we need reviews to be analyzed and synthesized so that you can get the answer to “which product is best for me?” without the pain. And since these types of social solutions capture a very diverse set of data, they allow very personalized recommendations to be generated. I know privacy is a growing concern around social media these days, but that is in fact one of the reasons I really like enterprise (B2B) solutions: they aren’t trying to grab your personal (non-business) information. They don’t care whether you were partying last night or just broke up with your boyfriend.

Just one final comment, or challenge, that I believe is worth posing: “How do we get people to want to share their data and feedback?” This is where I believe community sites like IT Central Station can help, by providing reputation analysis and allowing this reputation to feed into Internet-level reputation systems. However, should we also look to the product companies themselves to step up to the plate? Today most companies like to control customer feedback within their own systems, releasing success stories through Marketing and hiding failures within Customer Support. I think it’s fair to say that this model is crumbling and that social media is giving everyone a voice and incentivizing them to use it as often as possible.

Therefore, should companies be incentivizing clients & partners to share their thoughts on these external crowd-sourcing sites? As the song goes, “if you love someone, set them free. If they come back they’re yours; if they don’t they never were” 🙂

Read reviews of enterprise IT solutions from real users at IT Central Station. See reviews of Server Virtualization Software, BI Tools and other popular categories.

Freakonomics and User Errors

This week’s featured reviewer is Nigel Magson. Nigel is the Founder and Managing Director of Adroit consultancy. He specializes in data application, data management, system design and data analysis. If you haven’t already, check out his in-depth article about KXEN. Contact us if you would like to be one of our guest bloggers.

I’ve reached that age… already. Finding a university place for my son – yes, a rather disturbing thought, but one that rekindles memories of my own steps away from home into the world of work. So last week we trooped off to an open day at Bristol University. In the Wills lecture theatre we listened to what studying Economics at Bristol would be like. The lecture was punctuated with less-than-subliminal slides warning “If you don’t like maths, don’t study at Bristol”. That’s OK – Floyd, my son, is doing A-level maths, and says he enjoys it. The lecturer picked Freakonomics – Levitt’s pop-culture-meets-economics treatise – to dissect, and in particular the chapter where Levitt argues that an increase in the abortion rate decreases the crime rate. Soon the various datasets were appearing on the board, along with the statistical code which Levitt purportedly used, the associated stats tests, coefficients, p-values and so on. The lecturer exposes the programming and statistical errors and the flaws in Levitt’s theory. There is no significant relationship in the data; Levitt got it wrong, concludes the lecturer with satisfied academic smugness. His theory is bunkum.
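For anyone curious what “exposing the statistics” looks like in miniature, the core check is whether a fitted slope is distinguishable from zero. Here is a small Python sketch of a simple least-squares fit and the t-statistic for its slope; the data are made up, and comparing |t| against a rough cutoff of 2 is a rule-of-thumb normal approximation, not Levitt’s actual analysis:

```python
import math

def slope_t_statistic(xs, ys):
    """Fit y = a + b*x by least squares and return (b, t-statistic for b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    # residual sum of squares, with n - 2 degrees of freedom
    ssr = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    se = math.sqrt(ssr / (n - 2) / sxx)  # standard error of the slope
    return b, (b / se if se > 0 else float("inf"))

# Made-up data with no real trend: the slope estimate is nonzero,
# but its t-statistic is far below any conventional significance cutoff.
b, t = slope_t_statistic([1, 2, 3, 4, 5], [3, 1, 4, 1, 5])
print(b, t)  # slope 0.4, t about 0.65 -- not significant
```

The point the lecturer was making is exactly this: a nonzero coefficient means nothing by itself until its standard error says it could not plausibly be noise.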

Researching afterwards, I find the lecturer was by no means the first to show this (and he didn’t claim to be). Whilst I’m loving it, I sense my son has drifted off, somewhat fazed by the amount of stats and some of the arcane elements. Afterwards, I try to explain one of the projects we’re involved in: income forecasting using a reporting engine we’ve written in VBA on top of Excel, cranking survival curves in a stats engine. He’s not listening (whose son listens to their Dad anyway?), and we head off to the Modern Languages department…

I reflect. We help clients gain insight from their data, sometimes by doing the work, sometimes by setting up analysis systems such as Apteco’s FastStats, Smartanalyser or SPSS, training clients on them and helping them run them. They still end up making mistakes in their application of the tools or, like Levitt, getting the code wrong. Well, they do. We’ve all seen it, and this can cost their organisations millions in lost opportunities – or, more likely, simply go unnoticed. The human part. So replace the user? Not that easy. On the positive side, we’re part of an industry out there to help them maximise their investment and avoid the errors.

I suspect many of the clients I have worked with in selecting or even using tools or software have reactions similar to my son’s – and sometimes that’s understandable (if it’s not their day job and we go “geeky” on them). So now we have IT Central Station – a great place to access and share reviews. As reviewers, our responsibility is to keep readers from glazing over, to keep them listening; and as recipients we must try not to glaze over and keep listening ourselves… which, of course, is easier said than done.

Read reviews of Data Mining Solutions from real users at IT Central Station. See reviews of SPSS, KXEN, and other Data Mining products.

Bye Bye DBA – Hello DBMA

This week’s guest blogger is Lilian Hobbs. Dr. Hobbs has spent over 30 years working with database systems, from the early beginnings of a CODASYL database, through being part of both the Digital Equipment and Oracle database engineering groups working on relational databases, to the new database machines at Vertica and EMC. Her PhD was awarded by Southampton University for research into automating CODASYL database design, and she has written a number of books, some even on databases. Check out her website at www.database-evolution.com. Contact us if you would like to be one of our guest bloggers.

If you needed someone to manage your database, it used to be easy: simply advertise for a DBA (database administrator) and the applications would come flooding in. Today, the database world is changing with the introduction of database machines; therefore, what you now need is a DBMA (database machine administrator).

It’s interesting how the role of DBMA has arisen. When it was simply database software, no one seemed to expect the DBA to know anything about the hardware. Updating system parameters, checking for errors on a network card or updating some firmware, would traditionally be a no-go zone for the DBA, so what has changed?

Despite the fact that a database machine like Exadata, Greenplum or Netezza is an optimized hardware platform, with perhaps some special hardware components and special database software, for some reason the system administrators who would normally have managed the hardware now seem to think that it’s the DBA’s problem. In reality, there isn’t anything fundamentally new here, but not all system administrators are keen to manage this new technology, because it’s a ‘database machine’.

What does this mean for the DBA? Well, it means that the new DBMA needs to understand the hardware components within the database machine and be prepared to start managing and debugging them. If your Linux skills are a bit rusty, then this is the time to start revising! Courses are available from the vendors to teach you these new skills, and you will find tools available, as with database software, to complete the tasks, so don’t be put off by this change.
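As a taste of the kind of low-level check that now lands on the DBMA’s desk, here is a small Python sketch that reads the per-interface error counters Linux exposes in /proc/net/dev. The column positions follow the standard kernel layout; the sample text and interface counts below are invented for illustration:

```python
def nic_errors(proc_net_dev_text):
    """Return {interface: (rx_errors, tx_errors)} from /proc/net/dev content."""
    stats = {}
    for line in proc_net_dev_text.splitlines()[2:]:  # skip the two header lines
        if ":" not in line:
            continue
        iface, counters = line.split(":", 1)
        fields = counters.split()
        # receive errors are column 3, transmit errors column 11 (0-based 2 and 10)
        stats[iface.strip()] = (int(fields[2]), int(fields[10]))
    return stats

# On a real system you would read the live file:
#   with open("/proc/net/dev") as f:
#       print(nic_errors(f.read()))
sample = """Inter-|   Receive                          |  Transmit
 face |bytes packets errs drop fifo frame compressed multicast|bytes packets errs drop fifo colls carrier compressed
    lo: 1000 10 0 0 0 0 0 0 1000 10 0 0 0 0 0 0
  eth0: 500000 4000 7 0 0 0 0 0 300000 2500 2 0 0 0 0 0
"""
print(nic_errors(sample))  # {'lo': (0, 0), 'eth0': (7, 2)}
```

A nonzero and growing error count on a database machine’s interconnect is exactly the sort of thing the system administrators used to watch for you.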

Therefore, when you install the first database machine in your organization, make sure you allow time for learning these new skills. Discuss with the current system administrators what their role with the database machine will be. You may be lucky and they will want to manage it, but be prepared for them to say that you have to upgrade the firmware on the database machine components and call out engineers when a network card fails.

Yes, the landscape is changing slightly with database machines. As a long term software person, you may feel uneasy venturing into the hardware world. Hopefully, you will welcome the escape from just looking at the software and soon you will be wondering what all the fuss was about, and enjoy being a DBMA, which does sound rather grand.

Read reviews of Data Warehousing Appliances from real users at IT Central Station. See reviews of Exadata, Netezza, and other Data Warehousing solutions.

Inspect What You Expect: The Importance of Monitoring

We are once again featuring Eric Evans who is one of IT Central Station’s Expert Reviewers. Eric is a Senior Information Systems and Technology Manager at the 6th Marine Regiment Head Quarters. He has over 15 years of experience working with multi-disciplinary technology strategies and agile information systems management. Contact us if you would like to be one of our guest bloggers.

Recently I had the opportunity to sit down with a few leaders from various disciplines within IT. One of the biggest concerns that was voiced was the ability to effectively monitor the performance of their enterprise, systems and other assets. Having been in their position, I completely understood their frustration… one that’s played out almost daily in IT.

Yet, even in crisis situations, few IT leaders are able to decipher the handwriting on the wall until it’s too late.

When the famous luxury ship Titanic struck the iceberg, it took two and a half hours to sink. But although the hull was ripped and water was rushing into the compartments, when the first lifeboat was launched, there were only 28 people in it, despite its capacity to hold 65. In life – like on the Titanic – there are only a few people who have the ability to discern the meaning of small events and have the courage to make a decision. The rest go down with the ship.

Monitoring performance should be a top priority for any organization because it allows us to make a transparent and objective evaluation of whether our processes and projects have been a success or not. Enterprise network management and monitoring is not just routers, switches, and firewalls in today’s data centers. More than ever before, managing a network means managing the devices and applications on the network. Across the world, network administrators are consolidating physical servers to VMware, monitoring Active Directory performance, troubleshooting VoIP phone systems, and more.

The implementation of a monitoring and evaluation plan will provide us with the information required to evaluate and demonstrate to stakeholders the success of our processes and projects. Communicating process or project outcomes and success is a fundamental requirement of all processes and projects.

To borrow a phrase echoed by DoD leadership: "Inspect what you expect." In other words, successful business management requires the ongoing monitoring of performance in order to generate data by which to judge the success or otherwise of specific strategies. Improvement in performance can only realistically be achieved when management is properly informed about current performance. To this end, it is important to identify key performance indicators that enable management to monitor progress.

In our industry, uptime and availability are critical because downed services and systems can put lives at risk, in addition to costing money. It is our job and purpose to make sure that all services are up, running and healthy.

There are many tools on the market for monitoring performance; one that I have relied on heavily for years is SolarWinds.

SolarWinds is a network monitoring tool that offers advanced fault and performance management functionality across critical IT resources such as routers, WAN links, switches, firewalls, VoIP call paths, physical servers, virtual servers, domain controllers & other IT infrastructure devices.

Further, the software's easy-to-use interface lets you quickly deploy the product into production and apply your organization's monitoring policies across multiple devices.

Being able to quickly deploy a network monitoring tool is especially important to me, since I typically manage at least three networks at a time, often in austere environments. SolarWinds provides 24/7 monitoring of devices and notifies my Help Desk when a critical event has occurred. This has allowed my teams to identify, isolate, and resolve issues more quickly.
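The polling-and-alerting loop described above can be sketched in a few lines. This is not SolarWinds code; it is a minimal, generic illustration of what a 24/7 availability check does, with hypothetical device names and addresses:

```python
# A minimal sketch of a 24/7 availability check: poll each device and
# raise an alert for the help desk when one is unreachable.
# Device names, hosts, and ports below are hypothetical examples.
import socket

DEVICES = {
    "core-router": ("192.0.2.1", 22),
    "edge-firewall": ("192.0.2.2", 443),
}

def is_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def poll_devices(devices, probe=is_reachable):
    """Probe every device and return a list of critical alerts for the help desk."""
    alerts = []
    for name, (host, port) in devices.items():
        if not probe(host, port):
            alerts.append(f"CRITICAL: {name} ({host}:{port}) is unreachable")
    return alerts
```

A real monitoring platform adds scheduling, escalation, and dashboards on top of this basic probe-and-alert cycle, but the core idea is the same.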

Because data availability is critical to our decision makers and business operations, my team leverages SolarWinds as a central management platform for tracking all the connected resources on our networks and as an Asset Management capability.

I would be remiss if I didn't mention the value that SolarWinds brought to our Incident Management program by giving my Help Desk the power to immediately visualize how everything is connected, enabling us to deliver high-value, cost-effective IT services that support our global businesses. SolarWinds solutions allow us to maximize our IT investments by protecting our critical data and systems, and they empower us to keep business activities moving forward at all times, helping the organization achieve its goals.

This was instrumental in preventing service outages and significantly reduced downtime. Additionally, our top-level decision makers were able to understand and capitalize on the strategic potential of information technology by integrating it into everything they do.

The business environment today offers few second chances for course correction.

Network uptime is critical; it's the lifeblood of business operations. Our customers should be able to confidently rely on us to ensure that their mission-critical systems are available and running smoothly 24/7/365. Downtime means lost productivity, missed revenue opportunities, and potential brand damage.
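It is worth putting numbers behind "24/7/365": each extra "nine" of availability cuts the permitted downtime by a factor of ten. A quick calculation, purely illustrative:

```python
# The arithmetic behind availability targets: minutes of downtime
# per year that each availability percentage actually allows.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def allowed_downtime_minutes(availability_pct):
    """Minutes of downtime per year permitted at a given availability %."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% availability -> {allowed_downtime_minutes(pct):.1f} min/year")
```

At 99% availability a system may be down for roughly 5,256 minutes (about 3.6 days) per year; at 99.9%, about 526 minutes; at 99.99%, under an hour.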

Read reviews of Network Monitoring Software from real users at IT Central Station. See reviews of SolarWinds NPM, Wireshark and other Network Monitoring Software vendors.

Defining your Application Strategy

This week’s guest blogger is Sanjeev Gupta. Sanjeev is a consultant and industry analyst with 16+ years of experience. He blogs at www.sanjeevg.com and has posted several Expert Reviews on IT Central Station. Contact us if you would like to be one of our guest bloggers.

After defining the enterprise architecture, defining your application strategy is the next most critical step. However, more often than not, discussions about application strategy revolve primarily around adopting Best of Breed products or a Single Vendor Stack. Think again: is that really your application strategy? These can be outcomes of your strategic initiative, but not the starting point. A more practical approach is to evaluate application products against the enterprise requirements and the enterprise's strategic considerations.

I’m outlining the approach that I follow, one that has always worked for me. Before starting, please ensure that the following are in place:

  • Enterprise Vision & the Business Requirements
  • Enterprise Architecture & Architectural Considerations, standards & guidelines
Once you’ve ensured that the prerequisites are in place, adopt the following approach.
  1. Individual Product Analysis – as part of this step, evaluate the products in each individual space (e.g. Portal, ECM, BPM) and narrow down to at most two or three in each. The evaluation should focus on the prerequisites listed above, to ensure that the selected products can meet the business requirements and are in line with your enterprise strategy.
  2. Solution Analysis – once you have the individual products identified, define a set of alternative solutions, e.g. a combination of Portal, ECM, and BPM such as IBM WebSphere Portal, EMC Content Management System, and Savvion BPM. At this stage, you should include Single Vendor Stack solutions as well as Best of Breed stacks.
Pay attention to the fact that some products will not gel well with others; as you work through the various permutations and combinations, some solutions can be rejected outright. Narrow your evaluation down to at most five or six solutions, so that after the analysis you either have a clear winner or at most two options to choose from.
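The solution-analysis step above is essentially a cross-product of the shortlists, pruned by known incompatibilities. A small sketch of that filtering, where the product names and the incompatibility list are illustrative assumptions rather than recommendations:

```python
# A sketch of the solution-analysis step: enumerate every cross-space
# combination of shortlisted products and discard combinations containing
# pairs that are known not to interoperate. All names are illustrative.
from itertools import product

shortlist = {
    "Portal": ["IBM WebSphere Portal", "Liferay"],
    "ECM": ["EMC Documentum", "Alfresco"],
    "BPM": ["Savvion BPM", "jBPM"],
}

# Pairs assumed (hypothetically) not to gel well together.
incompatible = {("Liferay", "EMC Documentum")}

def candidate_stacks(shortlist, incompatible):
    """Yield every cross-space combination whose product pairs all interoperate."""
    spaces = list(shortlist)
    for combo in product(*(shortlist[s] for s in spaces)):
        pairs = {(a, b) for a in combo for b in combo if a != b}
        if not pairs & incompatible:
            yield dict(zip(spaces, combo))

# 2 x 2 x 2 = 8 combinations, minus the 2 containing the incompatible pair.
stacks = list(candidate_stacks(shortlist, incompatible))
```

With two products per space this is trivial to do by hand; the point is that rejecting incompatible pairs early keeps the final evaluation down to a manageable handful of stacks.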

Now comes the exciting part: the comparative evaluation of the alternative solution stacks. The three typical areas that I look at are:

  • Functional requirements of the enterprise – this forms the core of the evaluation, as a solution stack is useful only if it is capable of delivering what the business wants. Evaluate the stacks purely against each of the high-level functional requirements, both current and future.
  • Non-functional requirements of the enterprise – NFRs like performance, scalability, cost, and time to market should also be given sufficient consideration in the evaluation.
  • Architecture & strategy considerations – this is the part that most often gets missed or overlooked. Key points to evaluate are the interoperability of the products within the stack and the interoperability of the stack with the existing applications in the enterprise. Be aware that SOA can enable any set of applications to integrate with one another, but that should not be the benchmark for interoperability.

The screenshots are from the Evaluation Matrix template that I use for my analysis; you can either create your own or feel free to email me if you need a soft copy of the one that I use.

The % weight that you give to each of these requirements is entirely up to your understanding of the organization’s priorities and strategy…
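The weighted scoring behind an evaluation matrix like this is simple to express. A minimal sketch, where the weights, the 1-5 scores, and the two alternatives are placeholder assumptions you would replace with your own:

```python
# A minimal sketch of a weighted evaluation matrix: score each alternative
# against the three evaluation areas, weighting each area by organizational
# priority. All weights and scores below are illustrative placeholders.

weights = {"functional": 0.5, "non_functional": 0.3, "architecture": 0.2}

scores = {  # raw scores per area on a 1-5 scale (hypothetical)
    "Single Vendor Stack": {"functional": 4, "non_functional": 3, "architecture": 5},
    "Best of Breed":       {"functional": 5, "non_functional": 4, "architecture": 3},
}

def weighted_score(area_scores, weights):
    """Sum of (score x weight) across the evaluation areas."""
    return sum(area_scores[area] * w for area, w in weights.items())

# Rank the alternatives from highest to lowest weighted score.
ranked = sorted(scores, key=lambda s: weighted_score(scores[s], weights), reverse=True)
```

Note how sensitive the outcome is to the weights: shifting priority from functional fit toward architectural alignment could easily reverse the ranking, which is exactly why the weights must reflect your organization's actual priorities.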
Read reviews of enterprise solutions from real users at IT Central Station. See reviews of PaaS, BI Tools and other popular categories.

Application Lifecycle Management is Coming of Age

This week’s guest blogger is Alex Kriegel. Alex is an Enterprise & Data Architect for a state government and has over 10 years of experience in the field. He has written numerous books on relational database technologies and is also an Expert Reviewer on IT Central Station. Contact us if you would like to be one of our guest bloggers.


Software-intensive systems have a lot in common with humans – they are born, mature, and die. They sometimes even come back from the dead, or simply linger around scaring the daylights out of everyone who comes in contact with them. To minimize one’s chances of inadvertently releasing such monsters into the wild, one should adopt a holistic point of view – that of Application Lifecycle Management.

Wikipedia defines ALM as “… a continuous process of managing the life of an application through governance, development and maintenance. ALM is the marriage of business management to software engineering made possible by tools that facilitate and integrate requirements management, architecture, coding, testing, tracking, and release management.” I can’t say it is a novel concept – every organization is already doing all of this, either by design or by accident, with the majority falling into the giant “in-between” void. The key is tight integration between three key areas – governance, development, and operations.

To address these issues, many vendors came up with ALM tools – sometimes bundled, oftentimes integrated into a suite. Wikipedia lists over 40 “products” ranging from full-blown suites to assemblies of specific tools, both commercial and free open source. Gartner’s MarketScope mentions 20 leading vendors with ALM suite offerings, of which eight got a “Positive” rating, including IBM’s, which got the only “Strong Positive”. Forrester’s Wave for ALM lists seven vendors in the “strong” segment, with additional marks for market presence (IBM, HP, and Microsoft leading the big vendors, and CollabNet, Atlassian, and Rally Software leading the smaller ones).

The ALM offerings differ in degree of completeness, degree of coherence between the tools, and the extensibility model provided. Some of the more integrated offerings come in a variety of flavors, such as SaaS or on-premises installations, with numerous options to complement either one. And then there is the price tag to consider, which, as with everything that purports to address enterprise-wide issues, is not insignificant – ranging from tens of thousands of dollars to a couple of million (and then some), with additional costs for infrastructure, operations, and maintenance. Still, there is solid evidence that these investments, under the right circumstances, can and do pay off. Applying ALM principles to an enterprise integration and/or software development project can significantly improve the quality of the delivered system and positively affect the schedule.

The ALM processes fall into five domains:

  1. Requirements Definition and Management
  2. Quality and Build Management (including test case management)
  3. Software Change and Configuration Management
  4. Process Frameworks and Methodology
  5. Integration Across Multiple AD (Application Development) Tools

An integrated suite with a hefty price tag must address all of these domains to be worth considering; and for the best-of-breed route, integration considerations are of paramount importance in order to realize ALM’s potential. One such important consideration, for example, is integrated QA (either that or the ability to integrate with a QA suite).

So far, only two vendors offer fully integrated (all five domains), end-to-end, technology-neutral ALM suites: the IBM Rational Jazz Platform and HP ALM. The rest are either very technology-specific (such as Microsoft TFS) or stop short of providing some vital functionality (e.g. the issue and project tracker Jira does not address requirements management while FogBugz does, and neither comes close to providing test management functionality; both provide robust extensibility models to remedy this with third-party integrations). We will elaborate on the selection criteria and the process of choosing “the best-fit ALM” solution in follow-up posts.

Read reviews of Application Lifecycle Management Suites from real users on IT Central Station. See reviews of Microsoft, HP and other ALM vendors.