Continuous Compliance: How to Combat Regulatory Fatigue

Guest post by Reuven Harrison, CTO and Co-Founder, Tufin

Whether it’s protecting consumer credit card numbers, a company’s intellectual property, or a patient’s medical records, most of the government and industry regulations in place today were designed to protect the privacy and safety of people, as well as valuable applications and data. Given the escalating global problem with privacy and security, these regulations were needed. However, the downside of this is that enterprises must now operate under the requirements of multiple regulations and security standards. What we’re seeing as a result is something I call “regulatory fatigue,” where enterprises face a jungle of constraining regulations that ultimately inhibit their agility and productivity.

For many of our customers, the compliance burden is growing annually, but the budget for supporting it is not. There can be several audits per year for separate regulations such as PCI DSS, SOX, and so on. Additionally, it’s becoming more common today for business partners to require a controls assessment before entering into a services contract. Unfortunately for many companies, manual processes remain prevalent. For example, many compliance managers are still tracking their organization’s regulatory status in a manual spreadsheet, increasing their exposure to risk and even hefty compliance-violation fines.

Enterprises can reduce their regulatory fatigue and maintain their agility by shifting their approach to one of “continuous compliance.” That is, attaining a state where all compliance requirements are met, and then continuously maintaining that state. It’s easier and less time-consuming than the traditional “snapshot-in-time” approach. And when continuous compliance is achieved by automating policy violation alerts, remediation efforts and change processes, it becomes even more efficient and controlled, avoiding the delays and misconfigurations often associated with manual procedures.

Our experts have put together a survival guide to help CISOs, CSOs, Chief Compliance Officers and other stakeholders who must ensure regulatory compliance within their organizations. This guide walks through some of the key regulations in every industry, and gives detailed steps on how to adopt the continuous compliance approach.

If you’re ready to put an end to regulatory fatigue, download the free compliance survival guide today.

Understanding The Layers Of Hyper-Converged Infrastructure

Guest post by Michael Haag, Product Line Marketing Manager in the Storage and Availability Business Unit at VMware.

We’re almost halfway through 2016, and it continues to shape up to be the year of hyper-convergence. With faster CPUs, lower-cost flash (and exciting technologies on the horizon), ongoing software innovation, and server virtualization in the majority of data centers, now is the time to extend existing infrastructure investments with newer, modern solutions.

Three months ago, VMware introduced Virtual SAN 6.2 and gave this hyper-converged infrastructure (HCI) stack a name: VMware Hyper-Converged Software (VMware HCS). Virtual SAN 6.2 introduced a major set of new features to help improve space efficiency and management (check out the What’s New in 6.2 blog for those details). VMware HCS is the marketing name for the software stack of Virtual SAN, vSphere and vCenter Server.

With all the various terms and names being used to refer to HCI and the components, I want to take a few minutes to help clarify the terms we use at VMware and break down our view of HCI.

Does Virtual SAN = HCI?

Short answer: no. We sometimes use HCI, VMware HCS and even Virtual SAN in similar ways to refer to a solution where compute and storage functions are delivered from the hypervisor software on a common x86 platform (i.e. HCI). While all those terms are related to HCI, they refer to specific components or groups of components that make up a full hyper-converged infrastructure solution.

It’s important to understand that Virtual SAN on its own is not hyper-converged infrastructure. Virtual SAN is software-defined storage that is uniquely embedded directly in vSphere: it is the software that virtualizes the storage layer by abstracting and pooling direct-attached storage devices (SSDs, HDDs, PCIe flash, etc.) into shared storage.

Because Virtual SAN is so tightly integrated with (and dependent on) vSphere, whenever you talk about running Virtual SAN, the assumption is the compute virtualization piece from vSphere is there too.

Similarly, vSphere with Virtual SAN requires hardware to run it—as someone reminded me recently, software without hardware is about as useful as an ejection seat on a helicopter (think about that one for a sec if needed).

[Image: the VMware hyper-converged infrastructure stack – hyper-converged software running on industry-standard hardware]

As the image shows, HCI refers to the overall solution that includes two major components: hyper-converged software and industry-standard hardware. Without both of those pieces, you do not have HCI. From VMware, our software stack is VMware HCS, but that stack can look different for different vendors.

VMware has a unique advantage: VMware HCS is a tightly integrated software stack embedded in the hypervisor kernel, and VMware is the only vendor that provides this level of integration.

This architectural advantage delivers a number of benefits, including performance, simplicity, reliability and efficiency.

Do all HCI solutions look the same?

While all HCI solutions generally follow this blueprint of a software stack built on a hypervisor that runs on industry-standard hardware, in the end they can look very different and can have varying degrees of integration.

They start with server virtualization (some hypervisor, more often than not vSphere) and then add software-defined storage capabilities, which can be delivered tightly integrated, like Virtual SAN, or bolted on as a virtual storage appliance (a separate VM on each server). That software is then loaded onto an x86 platform.

Some vendors package that together into a turnkey appliance that can be bought as a single SKU, making those HCI layers less transparent and the deployment easier. One example of that type of HCI solution is the VCE VxRail HCI Appliance (which we’ve done with EMC), built on the full VMware HCS stack.

VMware HCS also offers you the ability to customize your hardware platform. You can choose from over 100 pre-certified x86 platforms from all of the major server vendors. We call these hardware options our Virtual SAN Ready Nodes.

An advantage to the Ready Node approach is that you can choose to deploy hardware that you already know. Equally important, but often overlooked, is that the relationships that you have with a partner or vendor, the procurement process you have in place and the support agreements with your preferred server vendor can all be leveraged. No need to create new support and procurement silos. No need to learn a new hardware platform including how to manage, install and configure it.

You can also read unbiased VMware Virtual SAN reviews from the tech community on IT Central Station.



What’s All the Fuss About Hyper-Converged Infrastructure?

Guest post By Anita Kibunguchy – Product Marketing Manager, Storage & Availability, VMware

Technology has made it so easy that customers looking to purchase a product or service need simply look online for reviews. Did you know that 80% of people try new things because of recommendations from friends? It’s the reason why e-commerce companies like Amazon have thrived! Customers want to hear what other customers have to say about the product, their experience with the brand, durability, support, purchase decisions, recommendations … the list goes on. This is no different in the B2B space. That is why IT Central Station is such an invaluable resource for customers looking to adopt new technologies like hyper-converged infrastructure (HCI) with VMware Virtual SAN. Customers get a chance to read unbiased product reviews from the tech community, which makes them smarter, much more informed buyers.

What is HCI?

Speaking of datacenter technologies, I’m sure you’ve heard hyper-converged infrastructure touted as the next big thing. It’s not surprising: according to IDC, hyper-converged infrastructure (HCI) is the fastest-growing segment of the converged (commodity-based hardware) infrastructure market, and it is poised to reach $4.8B in 2019.

Hyper-Converged Systems

The top-level definition of HCI is actually quite simple.  HCI is fundamentally about the convergence of compute, networking and storage onto shared industry-standard x86 building blocks.  It’s about moving the intelligence out of dedicated physical appliances and instead running all the datacenter functions as software on the hypervisor.  It’s about eliminating the physical hardware silos to adopt a simpler infrastructure based on scale-out x86.

Perhaps more fundamentally, it’s also about enabling private datacenters to adopt an architecture similar to the one used by large web-scale companies like Facebook, Google and Amazon. HCI is by no means confined to low-end use cases like ROBO and SMB (although it does great there too). The real promise of HCI is to provide the best building block to implement a full-blown Software Defined Data Center.

When thinking about HCI, hardware and software are fundamental to this new infrastructure.

  • Hardware: HCI includes industry-standard x86 systems that can be scaled up or out – almost like small Lego bricks stacked together to build a much more imposing infrastructure. By design, it’s simple, elegant, scalable infrastructure.
  • Software: I consider this the secret sauce. All the key datacenter functions – compute, networking, and storage – run as software on the hypervisor. They work seamlessly together in a tightly integrated software layer. The software can be scaled out across many x86 nodes. We believe that VMware offers the most flexible and compelling option for customers to adopt the HCI model: a Hyper-Converged Software (HCS) stack based on vSphere, Virtual SAN and vCenter. Customers can deploy the software on a wide range of pre-certified vendor hardware. They get the benefits of HCI, including strong software–hardware integration and a single point of support, while having unparalleled options of hardware to choose from.

Benefits of HCI

This new IT architecture has many benefits for the end customer including:

  • Adaptable software architecture that takes advantage of commodity technology trends, such as increasing CPU densities, new generations of solid-state storage and non-volatile memory, and evolving interconnects (40Gb and 100Gb Ethernet) and protocols (NVMe)
  • Uniform operational model that allows customers to manage their entire IT infrastructure with a single set of tools.
  • Last but not least, streamlined procurement, deployment and support. Customers can build their infrastructure in a gradual and scalable way as demands evolve

My advice to companies that are not sure about HCI and what it does: do your homework! It’s important to understand what the technology is and learn how this new paradigm of IT will change your business. There’s no denying that customers have observed lower TCO, flexibility, scalability, simplicity and higher performance with hyper-converged systems.

Looking to learn more about VMware Virtual SAN? The Virtual SAN Hands-on-Labs gives you an opportunity to experiment with many of the key features of Virtual SAN. You can also read more customer stories here and visit Virtual Blocks to learn more about Virtual SAN and VMware’s HCI strategy.

Choosing the Right Backup Solution

This week’s guest blogger is a member of IT Central Station’s Elite Squad – Chris Childerhose. Chris is a Technical Specialist for Storage, Virtualization & Backup. He’s published reviews of EMC Data Domain, Veeam, Nimble Storage, VMTurbo and other solutions. 

Every virtualization and system administrator deals with needing the ability to recover servers, files, etc., and having a backup solution to help with recovery will ease that burden. But how do you know which one is right for you? How would you go about choosing the right solution that will help you in your daily tasks?

Software Criteria

When choosing a backup solution, there are many things to consider based on your physical/virtual environment. What hypervisor are you running, what storage is being used, etc.? The best way to choose the right solution for the job is through evaluation, and the more you evaluate, the easier it will be to pick the right one for you. During the evaluation process you should consider things such as:

  • Compatibility with your chosen Hypervisor
  • Ease of installation and setup
  • Program ease of use and navigation
  • Backup scheduling
  • Reporting – is the reporting sufficient?
  • Popularity within the industry
  • Support for Physical and Virtual servers
  • And so on…and so on….

There are many criteria you can use in the evaluation stage, and the examples above are just a few. Composing a list before you start looking at software is the recommended approach; that way you are only considering software that fits most of your criteria before the evaluation/PoC stage.
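One way to make that criteria list actionable is a simple weighted scoring matrix filled in during the PoC. The sketch below is purely illustrative – the criteria weights, vendor names, and scores are hypothetical, not taken from any real evaluation:

```python
# Weighted scoring matrix for a backup-software evaluation (illustrative data).
# Each criterion gets a weight reflecting its importance; each vendor is
# scored 1-5 per criterion during the PoC, and a weighted total is computed.

CRITERIA_WEIGHTS = {
    "hypervisor_compatibility": 5,
    "ease_of_installation": 3,
    "ease_of_use": 4,
    "backup_scheduling": 4,
    "reporting": 3,
    "physical_and_virtual_support": 5,
}

def weighted_score(scores: dict) -> int:
    """Sum of (criterion weight x vendor score) over all criteria."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

vendor_scores = {
    "Vendor A": {"hypervisor_compatibility": 5, "ease_of_installation": 4,
                 "ease_of_use": 3, "backup_scheduling": 5,
                 "reporting": 2, "physical_and_virtual_support": 4},
    "Vendor B": {"hypervisor_compatibility": 4, "ease_of_installation": 5,
                 "ease_of_use": 5, "backup_scheduling": 4,
                 "reporting": 4, "physical_and_virtual_support": 3},
}

# Rank vendors by weighted total, best first.
ranked = sorted(vendor_scores,
                key=lambda v: weighted_score(vendor_scores[v]), reverse=True)
for vendor in ranked:
    print(vendor, weighted_score(vendor_scores[vendor]))
```

A matrix like this keeps the comparison honest: the weights are agreed on before anyone sees a demo, so a slick interface can’t quietly outweigh a must-have criterion.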


When you have completed your criteria list and selected vendors for evaluation, be sure to install all of them. Installing all of the products allows you to do a side-by-side comparison of the features you are looking for, like job setup, ease of use, etc. Being able to see the products and how they work side by side gives you the best evaluation experience.

During the comparison stage, look at things like the ability to conduct SAN-based backups versus LAN-based ones – how does each solution compare? Can the solution connect into your SAN fabric, allowing faster backups? If you cannot use SAN backups, how will that affect the overall performance of the environment? After backups complete, is there a reporting structure showing success/failure, length of time, amount of data, etc.? When working with the solution, is navigation for job creation/modification simple? Is the product cumbersome and/or frustrating when creating backups?

There are many things to be aware of when comparing products, and answering these questions as you work through each product is a great way to evaluate them.


Remember that there are many backup solutions out there for evaluation, and choosing the right one can be a difficult decision. Evaluating the ones that appeal most to your organization is the best way to go, and using a methodology for testing them is even better. In the end you will ensure your success by choosing the right solution for the job! Evaluate, evaluate, evaluate.

AppDynamics Winter ’16 News

Today’s post features a guest article by Anand Akela, Director of Product Marketing for APM at AppDynamics.

Not long ago at AppDynamics AppSphere™ 2015, we announced the AppDynamics Winter ’16 Release (4.2) that brings significant enhancements to our Application Intelligence Platform to provide essential support for businesses’ digital transformation initiatives.

The new release extends the capabilities of AppDynamics’ application-centric Unified Monitoring solution, providing greater visibility into the user journey with detailed user sessions support, and expanded monitoring with Server and Browser Synthetic Monitoring and support for C/C++ applications. It also brings major upgrades to AppDynamics Application Analytics solution to provide richer, deeper insights into users, applications, and the correlations between application performance and business metrics.

Enhanced Unified Monitoring

AppDynamics Unified Monitoring provides end-to-end visibility from the end-user through all the application layers and their supporting infrastructure, enabling comprehensive management of end-user experience and application health.

In addition to general availability for Server Monitoring and C/C++ language support, the new release also introduces more than two dozen new extensions to expand AppDynamics’ monitoring capabilities to more application and infrastructure components, including many for Amazon Web Services. In addition, the new release brings numerous functional and usability enhancements for Java, .Net, Python, PHP, Node.js and Web Server monitoring solutions.

General availability of application-centric server monitoring

AppDynamics Server Monitoring is an application-centric server monitoring platform that proactively detects and helps quickly resolve server performance issues in context of business transactions. As a key component of the AppDynamics Unified Monitoring solution, server monitoring complements application and database monitoring to provide the end-to-end visibility needed to improve end-user experience and reduce monitoring complexity.

Server Monitoring provides comprehensive CPU, memory, disk, networking, and running processes metrics for Linux and Windows servers. With the new solution, customers can drill down (see figure 1) to detailed server metrics directly from the end-to-end application flow map when troubleshooting application performance issues.

Fig 1: Drill down directly from the application flow map to view server details.


We are also announcing the Service Availability Monitoring (SAM) pack, which will be available as an add-on to Server Monitoring to help customers track the availability and basic performance metrics for HTTP services running on servers not natively monitored via an AppDynamics agent.


Fig 2: Service Availability Monitoring

General availability of C/C++ monitoring SDK

With the new release, AppDynamics now also supports monitoring of C/C++ applications via a monitoring SDK that enables the same real-time, end-to-end, user-to-database performance visibility as other supported languages, for rapid root-cause analysis and issue resolution.


Fig 3: C/C++ Application Performance Monitoring

These powerful capabilities are now available for C/C++ applications: automatic discovery and mapping of all tiers that service and interact with the C/C++ applications, automatic dynamic baselining, data collectors, and health rules, as well as managing key metrics including application load and response times, and key system resources including CPU, memory, and disk I/O.

Expanded Amazon Web Services monitoring with new extensions

Concurrent with the Winter ’16 Release, AppDynamics announced the availability of two dozen new extensions, including 19 for monitoring Amazon Web Services (AWS) components. These extensions are now available at the AppDynamics Exchange, joining more than one hundred extensions to enable monitoring application and infrastructure components not natively monitored by AppDynamics.


Fig 4: Extended coverage of AWS with new extensions

Powerful End-User Experience Monitoring

The Winter ’16 Release adds support for real-user sessions, providing a rich and detailed view into the user journey — what actions users take on a browser or a mobile device, step-by-step, as they move through the funnel, and how application performance impacts their journey. In addition, Browser Synthetic Monitoring becomes generally available. Together, Browser Synthetic Monitoring and Browser and Mobile Real-User Monitoring with user sessions support provide a comprehensive view of performance from the end-user perspective in a single intuitive dashboard.

Sessions monitoring as part of Browser/Mobile Real-User Monitoring

With the Winter ’16 Release, AppDynamics Browser and Mobile Real-User Monitoring now tracks and captures a user’s entire journey on a website or mobile app from the start until a configurable period of inactivity, or start-to-finish of a transaction sequence. Sessions can be viewed for individual users or a class of users. Sessions data is important for understanding funnel dynamics, tracking conversion and bounce rates, and seeing where in the sequence users had issues or disengaged. Performance issues and health violations and their causes are captured throughout a session, and the correlation with business impact can be captured, especially in conjunction with AppDynamics Application Analytics.
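The core idea of sessionization described above — grouping a user’s events into sessions that end after a configurable period of inactivity — can be sketched roughly as follows. This is an illustrative sketch of the general technique, not AppDynamics’ actual implementation; the event format and 30-minute timeout are assumptions:

```python
# Group a user's timestamped events into sessions: a new session starts
# whenever the gap since the previous event exceeds the inactivity timeout.

INACTIVITY_TIMEOUT = 30 * 60  # seconds; the timeout is configurable

def sessionize(timestamps, timeout=INACTIVITY_TIMEOUT):
    """Split event timestamps (epoch seconds) into inactivity-bounded sessions."""
    sessions = []
    for ts in sorted(timestamps):
        if sessions and ts - sessions[-1][-1] <= timeout:
            sessions[-1].append(ts)   # within the timeout: same session
        else:
            sessions.append([ts])     # inactivity gap: start a new session
    return sessions

# Three events close together, then one 45 minutes later: two sessions.
events = [0, 120, 600, 600 + 45 * 60]
print([len(s) for s in sessionize(events)])  # -> [3, 1]
```

Once events are bucketed this way, per-session metrics such as funnel position at exit, bounce (single-event sessions), and conversion can be computed over each bucket.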

General Availability of Browser Synthetic Monitoring

Browser Synthetic Monitoring enables enterprises to ensure availability and performance of their websites even in the absence of real user load. Incorporating the highly regarded, open-source WebPageTest technology, Browser Synthetic Monitoring eliminates the variability inherent in real-user monitoring, and provides accurate measurements for baselining performance, competitive benchmarking, and management of third-party content performance. In addition to reporting on availability, Browser Synthetic Monitoring can be scripted to measure a sequence of transactions simulating an actual user’s workflow, including entering forms data, log-in credentials, and actions to test and ensure application logic.

Because it is a cloud-based solution, enterprises can scale their synthetic monitoring up or down as needed, schedule measurement jobs flexibly anytime 24/7, and choose which of the more than two dozen points of presence around the globe they want to measure from, and with which browsers. Measurements can also be set up to automatically re-test immediately on error, failure, or timeout to reduce or eliminate false positives for more intelligent alerting. There’s no need to wait for the next available testing window, by which time conditions may have changed. Browser Synthetic data can be viewed side-by-side with Browser and Mobile Real-User data in a single dashboard; Browser Synthetic Monitoring snapshots are also correlated with AppDynamics’ server-side APM for end-to-end traceability of issues.
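The automatic re-test behavior — only alerting when a measurement fails on every immediate retry — can be sketched as a simple retry wrapper. This is illustrative logic for the general pattern, not AppDynamics code:

```python
# Re-test immediately on failure: only surface an error if the measurement
# fails on every attempt, filtering out transient glitches (false positives).

def measure_with_retries(measurement, attempts=3):
    """Run `measurement` up to `attempts` times; return the first success."""
    last_error = None
    for _ in range(attempts):
        try:
            return measurement()
        except Exception as err:   # transient failure: retry immediately
            last_error = err
    raise last_error               # persistent failure: a real alert

# A flaky check that fails twice, then succeeds, never raises an alert:
calls = {"n": 0}
def flaky_check():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient network glitch")
    return "ok"

print(measure_with_retries(flaky_check))  # -> ok
```

The design choice is the usual alerting trade-off: a few extra measurements per failure in exchange for alerts that almost always mean a real, persistent outage.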

Enhanced Application Analytics

AppDynamics Application Analytics is a rich, highly extensible, real-time analytics solution that gives IT and business teams deep insights into customer behaviors, and illuminates the correlations between application performance and business metrics. The updated Application Analytics provides support for more data sets, including all of AppDynamics APM data, log data, and APIs for importing/exporting external data sets; a custom SQL-based query language that enables unified search and log correlation with business transactions; a number of user interface enhancements and new out-of-the-box data widgets; and role-based access control.

These improvements allow enterprise users to immediately access rich customer profiles and behavioral data, and to quickly and conveniently perform customized queries to get the insights they need to more effectively engage their customers and make decisions that optimize business outcomes.

Advanced Query Language – AppDynamics Query Language (ADQL)

Application Analytics makes data accessible via the SQL-like, dynamic AppDynamics Query Language (ADQL), which enables advanced, fast, and nested data searches across multiple datasets, and supports rapid ad hoc analyses in real time.

Event Correlation between transactions and logs

Business, marketing and performance data are typically siloed and in many different formats. Application Analytics auto-correlates business transaction data from APM and log data from the machine agent to provide unprecedented end-to-end analytics into the digital user journey and the corresponding impact on business metrics.

Out-of-the-box visualization

This release introduces many new out-of-the-box visualization widgets for creating interactive custom dashboards that provide actionable data. New widgets include a funnel widget to track drop-off, a user conversion widget, and widgets for ad hoc analysis (multiple X and Y axes). Analytics widgets can be saved to custom dashboards, and reports can be scheduled for users and senior management.
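As a rough illustration of what a funnel widget computes, here is the step-to-step conversion calculation behind drop-off tracking. The step names and counts are made up for the example:

```python
# Funnel drop-off calculation: given the number of users reaching each
# ordered funnel step, compute the conversion rate at each transition.
# The drop-off rate at a transition is simply 1 minus the conversion rate.

def funnel_conversion(step_counts):
    """Return the step-to-step conversion rates as fractions."""
    return [round(nxt / cur, 3)
            for cur, nxt in zip(step_counts, step_counts[1:]) if cur]

# Hypothetical funnel: 1000 users land, 120 reach checkout.
steps = {"landing": 1000, "product_page": 600, "cart": 240, "checkout": 120}
print(funnel_conversion(list(steps.values())))  # -> [0.6, 0.4, 0.5]
```

In this made-up example, the cart step converts worst (0.4, i.e. 60% drop-off), which is exactly the kind of weak point a funnel widget is meant to expose.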

Role-based access control

In addition, role-based access control has been added to AppDynamics Application Analytics, enhancing security for sensitive business and customer data while simplifying access for users within the context of their permissions. It now enforces strict access control to analytics data by job function and supports flexible data access control by data type. It also unifies multiple logins, improving user experience and adoption inside the organization.

An enhanced platform that is an essential pillar of digital transformation

As the world becomes increasingly defined by software, enterprises of every description are pursuing digital transformation to satisfy user expectations for always-on, always effective engagement, and to realize the competitive efficiencies and advantages of digital delivery. The AppDynamics Application Intelligence Platform, with the far-reaching enhancements of the Winter ’16 Release, is designed to provide the next-generation application support needed to help enterprises achieve the user experience and operational success that is at the heart of effective digital transformation.

To read more about AppDynamics and the other popular APM solutions on our site visit the APM page today!

Best Practices to Prevent Privileged Account Abuse

Today we feature a guest blog post from SolarWinds. Check out this informative article about Best Practices and your SIEM solution.

If you are the IT security manager of a company with more than one system, you have two scenarios:

  1. The system admins have individual super-user access to each of the datacenter servers.
  2. The admins share the privileged user credentials to those servers.

The former is best practice, and the latter a headache, especially if one of the admins turns malicious, for any reason.

In scenario #2, when a server goes down, you won’t be able to quickly identify who made what changes, whether they accidentally or deliberately caused the service disruption. In all probability you may guess who the malicious person is, but it will be hard to prove, as it’s a shared account. Now, what if you have hundreds of servers and 50 system administrators sharing credentials? This all gets too complicated to deal with while investigating a security breach.

The 2015 Verizon Data Breach Investigations Report states that more than half of security incidents (55%) stemmed from privileged account abuse. Roughly, that’s about 44,000 incidents. And that’s a worrisome figure, though not unmanageable with the right security strategy and tools. The abuse may be an insider threat, or simply a case of a compromised super-user account being used from the outside – it could be one of your ex-employees. You may never know if you don’t have the right tools and processes in place.

So, what’s the best way forward?

Stop sharing passwords

Sharing passwords among system admins or using service accounts only complicates credential management and makes tracking difficult from an investigative or audit standpoint. Look for a solution that integrates with your existing Active Directory setup and helps you create groups and delegate permissions individually. When someone does log in with an administrator or privileged account, you should be alerted or receive a regular report to review that activity.

Collect and manage logs centrally

Having a centralized console to automatically collect, monitor and audit events relating to super-user accounts helps speed incident response and breach mitigation. You can dig into the specifics of each and every log file and analyze patterns, but if this exercise is manual, it’s cumbersome and inefficient. You have to automate it with the right tools and security strategy.

Set up notifications/alerts for anomalous activity

Create individual notifications/alerts for each type of login event that applies to a group or groups. Clearly define the correlation logic with respect to a specific activity, the number of events within a time interval, and the resulting actions. Examples:

  • Sending an email to the IT manager when a new member is being added to an admin group
  • Alerting when multiple administrator logon failures are happening in a span of 1 minute
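The second rule above — multiple administrator logon failures within a short span — amounts to a sliding-window check over event timestamps. A minimal sketch of that correlation logic, with an assumed threshold of five failures in a 60-second window (illustrative values, not a product default):

```python
# Flag a burst of failed admin logons: alert when `threshold` or more
# failures for the same account fall within a `window`-second interval.

from collections import deque

def detect_logon_bursts(failure_times, threshold=5, window=60):
    """Return the timestamps at which the alert condition becomes true."""
    alerts, recent = [], deque()
    for ts in sorted(failure_times):
        recent.append(ts)
        while recent and ts - recent[0] > window:
            recent.popleft()          # drop failures outside the window
        if len(recent) >= threshold:
            alerts.append(ts)         # correlation rule fires here
    return alerts

# Five failures in 40 seconds trigger an alert; two isolated ones do not.
print(detect_logon_bursts([0, 10, 20, 30, 40]))  # -> [40]
print(detect_logon_bursts([0, 300]))             # -> []
```

In a real SIEM the same rule would be expressed in the tool’s correlation engine rather than hand-rolled, but the window-and-threshold structure is the same.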

Become compliant & schedule audits

Regulatory compliance standards such as PCI DSS, SOX, HIPAA, etc., require that you have full accountability for your super-user accounts and activities. Periodic audits of the administrator account or admin group accounts are essential, not only to identify anomalous behavior (like account changes, user logons/logoffs, software installs, failed logons, stopped processes, etc.) but also to comply with industry requirements and audits.

Whether you manage a startup environment or an enterprise, curbing privileged account abuse should be one of the top priorities in your security policy. The policy must do away with manual, time-consuming log analysis and threat detection, and move toward an automated solution encompassing security information and event management.

HP ConvergedSystem 500 wins MVP award from CTR

Today we feature a guest blog post from Whitney Garcia, of HP. Check out what Whitney has to say about HP ConvergedSystem below.

The HP ConvergedSystem 500 for SAP HANA, powered by Intel, is the most cost-effective big data solution from HP, designed for businesses that may be just getting their feet wet with SAP HANA. This solution brings flexibility, reliability and now even greater performance to your datacenter.

Announced May 2015 at SAP Sapphire NOW (and re-introduced at HP Discover Las Vegas), the HP ConvergedSystem 500’s latest update includes availability with the new Intel Xeon E7 v3 architecture. This update means you can experience up to a 39% increase in workload performance and the solution now delivers 12 new scale-up and scale-out configurations.

But it wasn’t the innovative new updates alone that caught the eyes and ears of Computer Technology Review (CTR) when they started selecting products for their 2015 Most Valuable Products awards. It was the flexibility and reliability that set HP ConvergedSystem 500 for SAP HANA apart.

HP ConvergedSystem 500 for SAP HANA brings customers flexibility through a choice of operating systems, either SUSE Linux or Red Hat Linux. That means customers can begin their SAP HANA journey on an operating system they may already be familiar with, making it that much easier to get up and running. Additionally, the HP ConvergedSystem 500 offers reliability through support for the latest-generation SAP business suite, SAP S/4HANA.

CTR also recognized HP ConvergedSystem 500 for SAP HANA’s comprehensive data backup and recovery solution – the hardware infrastructure, monitoring and management features that “deliver availability, continuity and reliability through HP Serviceguard for SAP HANA, the industry’s only automated, high availability disaster recovery solution.”

CTR’s MVP award helped validate what HP ConvergedSystem solutions for SAP HANA have set out to do since their inception: to bring lower TCO and higher value to customers through innovation, functionality and affordability.

More on the MVP awards
For the first time in CTR’s 35-year publishing history, their editorial judging panel awarded a select number of honorary MVP awards as a way of recognizing products that are truly in a class by themselves. HP ConvergedSystem 500 for SAP HANA won a Most Valuable Product award in the Data Storage category.

More information on HP ConvergedSystem solutions for SAP HANA’s latest updates can be found in this article, HP delivers more flexibility and choice in your infrastructure.

And check out the entire Most Valuable Product Award article from CTR, and the full list of 2015 products here.

To see reviews about HP ConvergedSystem on IT Central Station, check out reviews here.

Digital and Mobile Adoption: An In‐Depth Look at How Digital Transformation Works

Contributed by James Quin, Senior Director, Content and C‐Suite Communities at CDM Media

The adoption of digital and mobile initiatives is a major undertaking for enterprises in every industry. Part of the process is determining the competitive advantages that these initiatives would bring to their organizations. This is an in‐depth look at how digital transformation works and what it looks like:

Digital transformation is sweeping pretty much every industry sector right now, and it’s shaping up to be one of the most influential changes that we’ve seen in many years. Many will argue that digital transformation is nothing more than the same “Social/Cloud/Analytics/Mobile” technology trend that we’ve been experiencing for the last few years. In a certain sense they’re right, but it’s so much more than just baseline technology adoption.


Digital transformation is about changing the way enterprises do business: how they interact with their customers and potential customers, how they develop their products and services, and indeed how they determine which products and services to offer. It begins with mobile, because mobile puts the requisite technology for pervasive interaction into everyone’s hands. The way in which people are interacting is through social channels that break down old‐school hierarchical interaction models, allowing for a more efficient exchange and even creation of information. That information can now be analyzed in ways that were inconceivable before, because not only are we looking at more information, we’re looking at different information, and we’re looking at it with a speed that we never have before. To make use of these new insights, enterprises need to be agile and responsive, and increasingly the cloud is the computing platform that allows for this level of dynamism, because it is accessible, on‐demand, and scalable.

The so-called CAMS (Cloud, Analytics, Mobile, Social) technologies have been with us for a few years now. They were introduced to us as “disruptive” because they forced us to change the way we worked; we’ve come to see them instead as “transformative” because of how thoroughly they’ve changed the way businesses operate. While cloud has primarily had an impact on how enterprises operate internally, the other three have very definitely had an impact on how businesses relate to and interact with the world at large, particularly when it comes to clients and partners.

I think it’s fair to say that there isn’t a single organization that hasn’t invested in at least one of these technologies to some degree, and realistically a significant subset has invested in all of them to a great degree. The two that are really driving change, of course, are analytics (to allow for greater understanding of, and ultimately engagement with, clients and partners) and mobility (to be the channel through which data is captured for analytics and engagement is created afterwards). These two technologies, which really didn’t exist just a few years ago, have rapidly become table stakes for successful businesses. That doesn’t mean that everyone is where they need to be, but everyone certainly is somewhere on the journey.

IT Central Station’s full report on Mobile App Platforms is available here.

HP 3PAR StoreServ All-Flash News

This week features a guest blog post by Calvin Zito, Storage Expert at HP. Thanks Calvin for your contribution!

To see real user reviews of HP 3PAR, check out IT Central Station.

I like telling stories with video, so here’s another video—this time of our latest ChalkTalk. Here you’ll learn about the new HP 3PAR StoreServ 20800 and 20850. Now, how about I summarize a few things for you here? The announcement includes 25% lower-cost flash capacity, a new class of massively scalable HP 3PAR, and flash-optimized data services for ITaaS consolidation and hybrid IT projects.

  • With the ultra-dense and scalable HP 3PAR StoreServ 20000, we’ve dropped the price of all-flash to $1.50 per usable GB.  If we add in the savings from replicas, we drive down the $/GB to 25 cents!
  • As you saw in the ChalkTalk, the all-flash HP 3PAR StoreServ 20850 can deliver over 3.2 million IOPS at sub-millisecond latency and over 75GB/second. On top of that, it can scale to 15PB of usable capacity. Worth mentioning again (with details in the ChalkTalk) is the huge power, cooling and space savings compared to other vendors.
  • We’re introducing a new 3.84TB SSD drive. With our ASIC-enabled data compaction increasing usable capacity by 75%, it brings down the cost of all-flash storage to $1.50/GB usable. That really is flash for the masses!
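The relationship between compaction and the quoted price per usable GB is simple arithmetic: increasing usable capacity spreads the same raw cost over more gigabytes. The sketch below illustrates that math; the raw-cost figure is back-computed from the announcement’s rounded numbers, not an HP price, and the function name is mine:

```python
# Illustrative sketch (not HP's pricing model): how a data-compaction
# gain lowers the effective cost per usable GB.

def cost_per_usable_gb(raw_cost_per_gb: float, compaction_gain: float) -> float:
    """Effective $/GB when compaction increases usable capacity.

    compaction_gain is the fractional increase in usable capacity,
    e.g. 0.75 for the quoted 75% increase.
    """
    return raw_cost_per_gb / (1.0 + compaction_gain)

# With a 75% usable-capacity increase, a raw cost of about $2.63/GB
# lands at the quoted $1.50 per usable GB.
print(round(cost_per_usable_gb(2.625, 0.75), 2))  # 1.5
```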

I also want to summarize some other software enhancements that come with the 20000 System. I talked to a few of our bloggers in advance of the announcement and they were very excited to hear about these enhancements.

  • 3PAR Persistent Checksum. This ensures end-to-end data integrity from the application server through to the storage array for any workload and is completely transparent to the servers and applications. This is implemented in the new 3PAR Gen5 Thin Express ASIC. Here’s another feature you can add to the list that has no impact on array performance.
  • Asynchronous Streaming. This allows for remote replication where latency, distance and recovery are optimized.
  • Bi-directional Peer Motion. We’ve talked about Federation and Peer Motion in the past. Now we’re extending HP 3PAR Peer Motion to include non-disruptive, bi-directional data movement for up to four arrays. This is an entirely native capability with no SAN virtualization appliance in the data path. With 3PAR storage federation, you can aggregate up to 60PB of usable capacity with over 10 million IOPS and 300 GB/second. And with a single click, workloads can move between federation members to dynamically rebalance storage resources for cost and performance optimization.
  • HP 3PAR Online Import. This now supports simple migration from HDS TagmaStore Network Storage Controller, USP, and VSP systems. So we are adding to the support we already had for EMC VMAX, VNX and Clariion CX4, as well as HP EVAs.
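The general idea behind end-to-end integrity features like Persistent Checksum is that a checksum computed when a block is written travels with it, so corruption anywhere along the path is caught when the block is read back. The actual 3PAR implementation lives in the Gen5 ASIC and is not public; the sketch below just illustrates the principle with a CRC, and all names in it are mine:

```python
import zlib

BLOCK_SIZE = 512  # a typical logical block size, used here for illustration

def protect(block: bytes) -> tuple[bytes, int]:
    """At write time, compute a checksum that travels with the block."""
    return block, zlib.crc32(block)

def verify(block: bytes, checksum: int) -> bool:
    """At read time (or at any hop in between), recompute and compare."""
    return zlib.crc32(block) == checksum

data = b"\x00" * BLOCK_SIZE
block, tag = protect(data)
assert verify(block, tag)               # an intact block passes
corrupted = b"\x01" + block[1:]
assert not verify(corrupted, tag)       # a single flipped byte is caught
```

Because the check happens at the endpoint rather than at each hop, it catches corruption introduced anywhere in between, which is why such schemes can be transparent to servers and applications.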

I’m completely pumped with the enhancements to HP 3PAR – and there’s more to come throughout 2015!

You can check it out for yourself! Read informative reviews, see side-by-side comparisons, and join the discussions taking place in one of our fastest growing categories.