Continuous Compliance: How to Combat Regulatory Fatigue

Guest post by Reuven Harrison, CTO and Co-Founder, Tufin

Whether it’s protecting consumer credit card numbers, a company’s intellectual property, or a patient’s medical records, most of the government and industry regulations in place today were designed to protect the privacy and safety of people, as well as valuable applications and data. Given the escalating global problem with privacy and security, these regulations were needed. However, the downside of this is that enterprises must now operate under the requirements of multiple regulations and security standards. What we’re seeing as a result is something I call “regulatory fatigue,” where enterprises face a jungle of constraining regulations that ultimately inhibit their agility and productivity.

For many of our customers, the compliance burden is growing annually, but the budget for supporting it is not. There can be several audits per year for separate regulations such as PCI DSS, SOX, and so on. Additionally, it’s becoming more common today for business partners to require a controls assessment before entering into a services contract. Unfortunately for many companies, manual processes remain prevalent. For example, many compliance managers are still tracking their organization’s regulatory status in a manual spreadsheet, increasing their exposure to risk and even hefty compliance-violation fines.

Enterprises can reduce their regulatory fatigue and maintain their agility by shifting their approach to one of “continuous compliance”: attaining a state where all compliance requirements are met, and then continuously maintaining that state. It’s easier and less time-consuming than the traditional “snapshot-in-time” approach. And when continuous compliance is achieved by automating policy violation alerts, remediation efforts and change processes, it becomes even more efficient and controlled, avoiding the delays and misconfigurations often associated with manual procedures.
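
To make the idea concrete, here is a minimal sketch of an automated, continuously running policy check. The rule format and the “no any/any” policy are hypothetical illustrations, not Tufin’s implementation or any particular regulation’s requirement:

```python
# Minimal sketch of an automated compliance check. The rule format and the
# "no any/any" policy are hypothetical examples, not Tufin's implementation
# or any specific regulation's requirement.

def violates_no_any_any(rule: dict) -> bool:
    """Flag firewall rules that allow traffic from any source to any destination."""
    return (rule.get("action") == "allow"
            and rule.get("source") == "any"
            and rule.get("destination") == "any")

def check_policies(rules: list) -> list:
    return [r for r in rules if violates_no_any_any(r)]

def alert(violation: dict) -> None:
    # In practice this would open a ticket or kick off a remediation workflow.
    print(f"Policy violation: rule {violation['id']} allows any-to-any traffic")

if __name__ == "__main__":
    rules = [
        {"id": 1, "source": "any", "destination": "any", "action": "allow"},
        {"id": 2, "source": "10.0.0.0/8", "destination": "dmz", "action": "allow"},
    ]
    # Run this on a schedule (e.g. hourly via cron) instead of once per audit cycle.
    for violation in check_policies(rules):
        alert(violation)
```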

Our experts have put together a survival guide to help CISOs, CSOs, Chief Compliance Officers and other stakeholders who must ensure regulatory compliance within their organizations. This guide walks through some of the key regulations in every industry, and gives detailed steps on how to adopt the continuous compliance approach.

If you’re ready to put an end to regulatory fatigue, download the free compliance survival guide today.

Understanding The Layers Of Hyper-Converged Infrastructure

Guest post by Michael Haag, Product Line Marketing Manager in the Storage and Availability Business Unit at VMware.

We’re almost halfway through 2016, and it continues to shape up to be the year of hyper-convergence. With faster CPUs, lower-cost flash (and exciting technologies on the horizon), continued software innovation, and the majority of data centers already using server virtualization, now is the time to extend existing infrastructure investments with newer, modern solutions.

Three months ago, VMware introduced Virtual SAN 6.2 and gave this hyper-converged infrastructure (HCI) stack a name: VMware Hyper-Converged Software (VMware HCS). Virtual SAN 6.2 introduced a major set of new features to help improve space efficiency and management (check out the What’s New in 6.2 blog for those details), while VMware HCS is the marketing name for the software stack of Virtual SAN, vSphere and vCenter Server.

With all the various terms and names being used to refer to HCI and the components, I want to take a few minutes to help clarify the terms we use at VMware and break down our view of HCI.

Does Virtual SAN = HCI?

Short answer: no. We sometimes use HCI, VMware HCS and even Virtual SAN in similar ways to refer to a solution where compute and storage functions are delivered from the hypervisor software on a common x86 platform (i.e. HCI). While all those terms are related to HCI, they refer to specific components or groups of components that make up a full hyper-converged infrastructure solution.

It’s important to understand that Virtual SAN on its own is not hyper-converged infrastructure. Virtual SAN is software-defined storage that is uniquely embedded directly in vSphere. Virtual SAN refers to the software that virtualizes the storage layer by abstracting and pooling together the direct attached storage devices (SSDs, HDDs, PCIe, etc.) into shared storage.

Because Virtual SAN is so tightly integrated with (and dependent on) vSphere, whenever you talk about running Virtual SAN, the assumption is the compute virtualization piece from vSphere is there too.

Similarly, vSphere with Virtual SAN requires hardware to run it—as someone reminded me recently, software without hardware is about as useful as an ejection seat on a helicopter (think about that one for a sec if needed).

[Image: the layers of HCI, with hyper-converged software running on industry-standard hardware]

As the image shows, HCI refers to the overall solution that includes two major components: hyper-converged software and industry-standard hardware. Without both of those pieces, you do not have HCI. From VMware, our software stack is VMware HCS, but that stack can look different for different vendors.

VMware has a unique advantage here: VMware HCS is a tightly integrated software stack embedded in the hypervisor kernel, and VMware is the only vendor that provides this level of integration.

This architectural advantage delivers a number of benefits, including performance, simplicity, reliability and efficiency.

Do all HCI solutions look the same?

While all HCI solutions generally follow this blueprint of having a software stack built on a hypervisor that runs on industry-standard hardware, in the end they can look very different and can have varying degrees of integration.

All of them start with server virtualization (a hypervisor, which more often than not is vSphere) and then add software-defined storage capabilities, which can be delivered tightly integrated like Virtual SAN or bolted on as a virtual storage appliance (a separate VM on each server). That software is then loaded onto an x86 platform.

Some vendors package that together into a turnkey appliance that can be bought as a single SKU, making those HCI layers less transparent and the deployment easier. One example of that type of HCI solution is the VCE VxRail HCI Appliance (which we’ve done with EMC), built on the full VMware HCS stack.

VMware HCS also offers you the ability to customize your hardware platform. You can choose from over 100 pre-certified x86 platforms from all of the major server vendors. We call these hardware options our Virtual SAN Ready Nodes.

An advantage to the Ready Node approach is that you can choose to deploy hardware that you already know. Equally important, but often overlooked, is that the relationships that you have with a partner or vendor, the procurement process you have in place and the support agreements with your preferred server vendor can all be leveraged. No need to create new support and procurement silos. No need to learn a new hardware platform including how to manage, install and configure it.

You can also read unbiased VMware Virtual SAN reviews from the tech community on IT Central Station.


What’s All the Fuss About Hyper-Converged Infrastructure?

Guest post by Anita Kibunguchy, Product Marketing Manager, Storage & Availability, VMware

Technology has made it easy for customers looking to purchase a product or service to simply look online for reviews. Did you know that 80% of people try new things because of recommendations from friends? It’s the reason why e-commerce companies like Amazon have thrived! Customers want to hear what other customers have to say about the product, their experience with the brand, durability, support, purchase decisions, recommendations … the list goes on. This is no different in the B2B space. That is why IT Central Station is such an invaluable resource for customers looking to adopt new technologies like hyper-converged infrastructure (HCI) with VMware Virtual SAN. Customers get a chance to read unbiased product reviews from the tech community, which makes them smarter, much more informed buyers.

What is HCI?

Speaking of datacenter technologies, I’m sure you’ve heard about hyper-converged infrastructure as the next big thing. It is not surprising that, according to IDC, hyper-converged infrastructure (HCI) is the fastest growing segment of the converged (commodity-based hardware) infrastructure market, which is poised to reach $4.8B in 2019.

Hyper-Converged Systems

The top-level definition of HCI is actually quite simple.  HCI is fundamentally about the convergence of compute, networking and storage onto shared industry-standard x86 building blocks.  It’s about moving the intelligence out of dedicated physical appliances and instead running all the datacenter functions as software on the hypervisor.  It’s about eliminating the physical hardware silos to adopt a simpler infrastructure based on scale-out x86.

Perhaps more fundamentally, it’s also about enabling private datacenters to adopt an architecture similar to the one used by large web-scale companies like Facebook, Google and Amazon. HCI is by no means confined to low-end use cases like ROBO and SMB (although it does great there too). The real promise of HCI is to provide the best building block to implement a full-blown Software Defined Data Center.

When thinking about HCI, keep in mind that both hardware and software are fundamental to this new infrastructure:

  • Hardware: HCI includes industry-standard x86 systems that can be scaled up or out, almost like small Lego bricks stacked together to build a much more imposing infrastructure. By design, it’s simple, elegant, scalable infrastructure.
  • Software: I consider this the secret sauce. All the key datacenter functions – compute, networking, and storage – run as software on the hypervisor. They work seamlessly together in a tightly integrated software layer. The software can be scaled out across many x86 nodes. We believe that VMware offers the most flexible and compelling option for customers to adopt the HCI model: a Hyper-Converged Software (HCS) stack based on vSphere, Virtual SAN and vCenter. Customers can deploy the software on a wide range of pre-certified vendor hardware. They get the benefits of HCI, including strong software–hardware integration and a single point of support, while having unparalleled options of hardware to choose from.

Benefits of HCI

This new IT architecture has many benefits for the end customer including:

  • Adaptable software architecture that takes advantage of commodity technology trends, such as increasing CPU densities, new generations of solid-state storage and non-volatile memories, and evolving interconnects (40Gb, 100Gb Ethernet) and protocols (NVMe)
  • Uniform operational model that allows customers to manage their entire IT infrastructure with a single set of tools.
  • Last but not least, streamlined procurement, deployment and support. Customers can build their infrastructure in a gradual and scalable way as demands evolve

My advice to companies that are not sure about HCI and what it does is simple: do your homework! It’s important to understand what the technology is and learn how this new paradigm of IT will change your business. There’s no denying that customers have observed lower TCO, flexibility, scalability, simplicity and higher performance with hyper-converged systems.

Looking to learn more about VMware Virtual SAN? The Virtual SAN Hands-on-Labs gives you an opportunity to experiment with many of the key features of Virtual SAN. You can also read more customer stories here and visit Virtual Blocks to learn more about Virtual SAN and VMware’s HCI strategy.

AppDynamics Winter ’16 News

Today’s post features a guest article by Anand Akela, Director of Product Marketing for APM at AppDynamics.

Not long ago at AppDynamics AppSphere™ 2015, we announced the AppDynamics Winter ’16 Release (4.2) that brings significant enhancements to our Application Intelligence Platform to provide essential support for businesses’ digital transformation initiatives.

The new release extends the capabilities of AppDynamics’ application-centric Unified Monitoring solution, providing greater visibility into the user journey with detailed user sessions support, and expanded monitoring with Server and Browser Synthetic Monitoring and support for C/C++ applications. It also brings major upgrades to the AppDynamics Application Analytics solution to provide richer, deeper insights into users, applications, and the correlations between application performance and business metrics.

Enhanced Unified Monitoring

AppDynamics Unified Monitoring provides end-to-end visibility from the end-user through all the application layers and their supporting infrastructure, enabling comprehensive management of end-user experience and application health.

In addition to general availability for Server Monitoring and C/C++ language support, the new release also introduces more than two dozen new extensions to expand AppDynamics’ monitoring capabilities to more application and infrastructure components, including many for Amazon Web Services. In addition, the new release brings numerous functional and usability enhancements for Java, .Net, Python, PHP, Node.js and Web Server monitoring solutions.

General availability of application-centric server monitoring

AppDynamics Server Monitoring is an application-centric server monitoring platform that proactively detects and helps quickly resolve server performance issues in the context of business transactions. As a key component of the AppDynamics Unified Monitoring solution, server monitoring complements application and database monitoring to provide the end-to-end visibility needed to improve end-user experience and reduce monitoring complexity.

Server Monitoring provides comprehensive metrics for CPU, memory, disk, network, and running processes on Linux and Windows servers. With the new solution, customers can drill down (see figure 1) to detailed server metrics directly from the end-to-end application flow map when troubleshooting application performance issues.

Fig 1: Drill down directly from the application flow map to view server details.

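As a rough illustration of the kinds of server metrics described above, here is a generic sketch using the open-source psutil library (an illustration only; the AppDynamics agent collects these metrics natively):

```python
# Generic sketch of collecting basic server metrics with the open-source
# psutil library; the AppDynamics agent gathers these natively, so this is
# only meant to illustrate the kinds of metrics involved.
import psutil

def collect_server_metrics() -> dict:
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),          # CPU utilization
        "memory_percent": psutil.virtual_memory().percent,      # RAM in use
        "disk_percent": psutil.disk_usage("/").percent,         # root volume usage
        "net_bytes_sent": psutil.net_io_counters().bytes_sent,  # network I/O
        "net_bytes_recv": psutil.net_io_counters().bytes_recv,
        "process_count": len(psutil.pids()),                    # running processes
    }

if __name__ == "__main__":
    print(collect_server_metrics())
```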

We are also announcing the Service Availability Monitoring (SAM) pack, which will be available as an add-on to Server Monitoring to help customers track the availability and basic performance metrics for HTTP services running on servers not natively monitored via an AppDynamics agent.


Fig 2: Service Availability Monitoring
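
The kind of check SAM performs can be sketched generically as a simple HTTP probe that records availability and response time. This is an illustrative standard-library example with a hypothetical health-check URL, not the AppDynamics implementation:

```python
# Generic HTTP availability/latency probe using only the standard library.
# Illustrative of the kind of check SAM performs; not the AppDynamics
# implementation, and the health-check URL below is hypothetical.
import time
import urllib.error
import urllib.request

def probe(url: str, timeout: float = 5.0) -> dict:
    start = time.monotonic()
    status = None
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except urllib.error.HTTPError as err:
        status = err.code              # server answered with an error status
    except (urllib.error.URLError, OSError):
        pass                           # unreachable, DNS failure, timeout, etc.
    elapsed_ms = (time.monotonic() - start) * 1000
    return {
        "url": url,
        "available": status is not None and status < 500,
        "status": status,
        "response_ms": round(elapsed_ms, 1),
    }

if __name__ == "__main__":
    print(probe("https://example.com/health"))
```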

General availability of C/C++ monitoring SDK

With the new release, AppDynamics now also supports monitoring of C/C++ applications via a monitoring SDK that enables the same real-time, end-to-end, user-to-database performance visibility as other supported languages, for rapid root-cause analysis and issue resolution.


Fig 3: C/C++ Application Performance Monitoring

These powerful capabilities are now available for C/C++ applications: automatic discovery and mapping of all tiers that service and interact with the C/C++ applications; automatic dynamic baselining, data collectors, and health rules; and management of key metrics, including application load and response times, and key system resources such as CPU, memory, and disk I/O.

Expanded Amazon Web Services monitoring with new extensions

Concurrent with the Winter ’16 Release, AppDynamics announced the availability of two dozen new extensions, including 19 for monitoring Amazon Web Services (AWS) components. These extensions are now available at the AppDynamics Exchange, joining more than one hundred extensions that enable monitoring of application and infrastructure components not natively monitored by AppDynamics.


Fig 4: Extended coverage of AWS with new extensions

Powerful End-User Experience Monitoring

The Winter ’16 Release adds support for real-user sessions, providing a rich and detailed view into the user journey — what actions users take on a browser or a mobile device, step-by-step, as they move through the funnel, and how application performance impacts their journey. In addition, Browser Synthetic Monitoring becomes generally available. Together, Browser Synthetic Monitoring, and Browser and Mobile Real-User Monitoring with user sessions support, provide a comprehensive view of performance from the end-user perspective in a single intuitive dashboard.

Sessions monitoring as part of Browser/Mobile Real-User Monitoring

With the Winter ’16 Release, AppDynamics Browser and Mobile Real-User Monitoring now tracks and captures a user’s entire journey on a website or mobile app from the start until a configurable period of inactivity, or start-to-finish of a transaction sequence. Sessions can be viewed for individual users or a class of users. Sessions data is important for understanding funnel dynamics, tracking conversion and bounce rates, and seeing where in the sequence users had issues or disengaged. Performance issues and health violations and their causes are captured throughout a session, and the correlation with business impact can be captured, especially in conjunction with AppDynamics Application Analytics.
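
To illustrate the underlying idea of a session bounded by inactivity, events for a user can be grouped whenever the gap between consecutive events exceeds a timeout. This is a generic sketch, not AppDynamics’ actual sessionization logic:

```python
# Generic sketch of grouping one user's events into sessions bounded by an
# inactivity timeout; not AppDynamics' actual sessionization logic.
from datetime import datetime, timedelta

def sessionize(events, inactivity=timedelta(minutes=30)):
    """events: [{'time': datetime, 'action': str}, ...] for a single user."""
    sessions, current = [], []
    for event in sorted(events, key=lambda e: e["time"]):
        if current and event["time"] - current[-1]["time"] > inactivity:
            sessions.append(current)   # gap exceeded the timeout: close the session
            current = []
        current.append(event)
    if current:
        sessions.append(current)
    return sessions

if __name__ == "__main__":
    t0 = datetime(2016, 1, 1, 12, 0)
    events = [
        {"time": t0, "action": "view_product"},
        {"time": t0 + timedelta(minutes=5), "action": "add_to_cart"},
        {"time": t0 + timedelta(hours=2), "action": "checkout"},
    ]
    print(len(sessionize(events)))  # 2: the two-hour gap starts a new session
```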

General Availability of Browser Synthetic Monitoring

Browser Synthetic Monitoring enables enterprises to ensure availability and performance of their websites even in the absence of real user load. Incorporating the highly regarded, open-source WebPageTest technology, Browser Synthetic Monitoring eliminates the variability inherent in real-user monitoring, and provides accurate measurements for baselining performance, competitive benchmarking, and management of third-party content performance. In addition to reporting on availability, Browser Synthetic Monitoring can be scripted to measure a sequence of transactions simulating an actual user’s workflow, including entering forms data, log-in credentials, and actions to test and ensure application logic.

Because it is a cloud-based solution, enterprises can scale their synthetic monitoring up or down as needed, schedule measurement jobs flexibly anytime 24/7, and choose which of more than two dozen points of presence around the globe they want to measure, and with which browsers. Measurements can also be set up to automatically re-test immediately on error, failure, or timeout to reduce or eliminate false positives for more intelligent alerting. There’s no need to wait for the next available testing window, by which time conditions may have changed. Browser Synthetic data can be viewed side-by-side with Browser and Mobile Real-User data in a single dashboard; Browser Synthetic Monitoring snapshots are also correlated with AppDynamics’ server-side APM for end-to-end traceability of issues.
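
A heavily simplified sketch of the retry-on-error idea, in plain Python with a hypothetical URL rather than the WebPageTest-based AppDynamics service, might look like this:

```python
# Generic sketch of a synthetic check that automatically re-tests on error or
# timeout before alerting, to cut down on false positives. This is plain
# Python, not the WebPageTest-based AppDynamics service, and the URL is
# hypothetical.
import time
import urllib.error
import urllib.request

def synthetic_check(url: str, retries: int = 2, timeout: float = 10.0) -> bool:
    """Return True if the page loads successfully, re-testing on failure."""
    for attempt in range(retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status < 400:
                    return True
        except (urllib.error.URLError, OSError):
            pass
        time.sleep(2 ** attempt)       # brief backoff before the re-test
    return False

if __name__ == "__main__":
    if not synthetic_check("https://example.com/login"):
        print("ALERT: login page failed even after automatic re-tests")
```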

Enhanced Application Analytics

AppDynamics Application Analytics is a rich, highly extensible, real-time analytics solution that gives IT and business teams deep insights into customer behaviors, and illuminates the correlations between application performance and business metrics. The updated Application Analytics provides support for more data sets, including all of AppDynamics APM data, log data, and APIs for importing/exporting external data sets; a custom SQL-based query language that enables unified search and log correlation with business transactions; a number of user interface enhancements and new out-of-the-box data widgets; and role-based access control.

These improvements allow enterprise users to immediately access rich customer profiles and behavioral data, and to quickly and conveniently perform customized queries to get the insights they need to more effectively engage their customers and make decisions that optimize business outcomes.

Advanced Query Language – AppDynamics Query Language (ADQL)

Application Analytics makes data accessible via the SQL-like, dynamic AppDynamics Query Language (ADQL), which enables advanced, fast, and nested data searches across multiple datasets, and supports rapid ad hoc analyses in real time.

Event Correlation between transactions and logs

Business, marketing and performance data is typically siloed and stored in many different formats. Application Analytics auto-correlates business transaction data from APM with log data from the machine agent to provide unprecedented end-to-end analytics into the digital user journey and the corresponding impact on business metrics.
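
Conceptually, the correlation resembles joining the two data sets on a shared key such as a request or transaction ID. The sketch below is a simplified illustration with hypothetical field names; AppDynamics performs this correlation automatically:

```python
# Simplified sketch of correlating business-transaction records with log
# records on a shared request ID. AppDynamics performs this correlation
# automatically; the field names here are hypothetical.
from collections import defaultdict

def correlate(transactions, logs):
    logs_by_request = defaultdict(list)
    for entry in logs:
        logs_by_request[entry["request_id"]].append(entry)
    return [
        {**txn, "logs": logs_by_request.get(txn["request_id"], [])}
        for txn in transactions
    ]

if __name__ == "__main__":
    txns = [{"request_id": "r1", "name": "Checkout", "duration_ms": 840}]
    logs = [{"request_id": "r1", "level": "ERROR", "message": "payment gateway timeout"}]
    print(correlate(txns, logs))
```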

Out-of-the-box visualization

This release introduces many new out-of-the-box visualization widgets for creating interactive custom dashboards that provide actionable data. New out-of-the-box widgets include a funnel widget to track drop-off, a user conversion widget, and widgets for ad hoc analysis (multiple X and Y). Analytics widgets can be saved to custom dashboards, and reports can be scheduled for users and senior management.

Role-based access control

In addition, role-based access control has been added to AppDynamics Application Analytics, enhancing security for sensitive business and customer data while at the same time simplifying access for users within the context of their permissions. It now enforces strict access control to analytics data by job function and supports flexible data access control by data type. It also unifies multiple logins, thereby improving user experience and adoption inside the organization.

An enhanced platform that is an essential pillar of digital transformation

As the world becomes increasingly defined by software, enterprises of every description are pursuing digital transformation to satisfy user expectations for always-on, always effective engagement, and to realize the competitive efficiencies and advantages of digital delivery. The AppDynamics Application Intelligence Platform, with the far-reaching enhancements of the Winter ’16 Release, is designed to provide the next-generation application support needed to help enterprises achieve the user experience and operational success that is at the heart of effective digital transformation.

To read more about AppDynamics and the other popular APM solutions on our site, visit the APM page today!

Best Practices to Prevent Privileged Account Abuse

Today we feature a guest blog post from SolarWinds. Check out this informative article about Best Practices and your SIEM solution.

If you are the IT security manager of a company that has more than one system, you have two scenarios:

  1. The system admins have individual super-user access to each of the datacenter servers.
  2. The admins share the privileged user credentials to those servers.

The former is best practice, and the latter a headache, especially if one of the admins becomes malicious, for any reason.

In scenario #2, when a server goes down, you won’t be able to quickly identify who made what changes, whether accidentally or deliberately causing service disruption. In all probability, you may guess who the malicious person is, but it will be hard to prove as it’s a shared account. Now, what if you have hundreds of servers, and 50 system administrators sharing credentials? This all gets to a level that’s too complicated to deal with while investigating a security breach.

The 2015 Verizon Data Breach Investigations Report states that more than half of security incidents (55%) were from privileged account abuse. Roughly, that’s about 44,000 incidents. And that’s a worrisome figure, though not unmanageable with the right security strategy and tools. This may be an insider threat, or simply a case of using a compromised super-user account from the outside – it could be one of your ex-employees. You may never know if you don’t have the right tools and processes in place.

So, what’s the best way forward?

Stop sharing passwords

Sharing passwords among system admins or using service accounts only complicates credential management, and makes tracking difficult from an investigative or audit standpoint. Look for a solution that will integrate with your existing Active Directory setup, and one that will help you create groups and delegate permissions individually. When someone does log in with an administrator or privileged account, you should be alerted or receive a regular report to review that activity.
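
As a rough sketch of that kind of review, privileged logons can be pulled out of collected events and summarized. The field and account names are hypothetical, not a specific product’s schema:

```python
# Generic sketch: pull logons by privileged accounts out of centrally
# collected events for review. Field names and account names are
# hypothetical, not a specific SIEM product's schema.
PRIVILEGED_ACCOUNTS = {"administrator", "root"}

def privileged_logons(events):
    return [
        e for e in events
        if e.get("event") == "logon" and e.get("user", "").lower() in PRIVILEGED_ACCOUNTS
    ]

if __name__ == "__main__":
    events = [
        {"event": "logon", "user": "Administrator", "host": "db01", "time": "02:13"},
        {"event": "logon", "user": "jdoe", "host": "web02", "time": "09:01"},
    ]
    for e in privileged_logons(events):
        print(f"Review: {e['user']} logged on to {e['host']} at {e['time']}")
```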

Collect and manage logs centrally

Having a centralized console to automatically collect, monitor and audit events relating to super-user accounts helps in faster incident response or breach mitigation. You may dig into the specifics of each and every log file, and analyze patterns. If this exercise is manual, it’s cumbersome and inefficient. You have to automate it with the right tools and security strategy.

Set up notifications/alerts for anomalous activity

Create individual notifications/alerts for each type of login event that applies to a group or groups. Clearly define the correlation logic with respect to a specific activity, number of events within a time interval, and the resulting actions. Examples:

  • Sending an email to the IT manager when a new member is added to an admin group
  • Alerting when multiple administrator logon failures occur within a span of one minute (see the sketch below)
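
Here is a minimal sketch of the correlation logic behind the second example, implemented as a generic sliding-window count with hypothetical event fields rather than any particular SIEM product’s rule syntax:

```python
# Generic sliding-window correlation rule: raise an alert when failed
# administrator logons within the window reach a threshold. Event fields
# are hypothetical, not a particular SIEM product's rule syntax.
from datetime import datetime, timedelta

def failed_admin_logon_alerts(events, window=timedelta(minutes=1), threshold=3):
    failures = sorted(
        e["time"] for e in events
        if e["event"] == "logon_failure" and e["user"].lower() == "administrator"
    )
    alerts, start = [], 0
    for end, t in enumerate(failures):
        while t - failures[start] > window:
            start += 1                 # slide the window forward
        if end - start + 1 >= threshold:
            alerts.append(t)           # threshold reached within the window
    return alerts

if __name__ == "__main__":
    t0 = datetime(2016, 5, 1, 2, 0, 0)
    events = [{"time": t0 + timedelta(seconds=10 * i),
               "event": "logon_failure", "user": "Administrator"} for i in range(4)]
    print(failed_admin_logon_alerts(events))  # fires once 3 failures fall inside 1 minute
```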

 Become compliant & schedule audits

Regulatory compliance standards such as PCI DSS, SOX, HIPAA, etc., require that you have full accountability for your super-user accounts and activities. Periodic audits of the administrator account or admin group accounts are essential not only to identify anomalous behavior (like account changes, user logon/logoff, software installs, failed logons, stopped processes, etc.) but also to comply with industry requirements and audits.

Whether you manage a startup environment or an enterprise, curbing privileged account abuse should be one of the top priorities in your security policy. The policy must do away with manual, time-consuming log analyses and threat detection, and move towards an automated solution encompassing security information and event management.

HP ConvergedSystem 500 wins MVP award from CTR

Today we feature a guest blog post from Whitney Garcia, of HP. Check out what Whitney has to say about HP ConvergedSystem below.

The HP ConvergedSystem 500 for SAP HANA, powered by Intel, is the most cost-effective big data solution from HP, designed for businesses that may be just getting their feet wet with SAP HANA. This solution brings flexibility, reliability and now even greater performance to your datacenter.

Announced May 2015 at SAP Sapphire NOW (and re-introduced at HP Discover Las Vegas), the HP ConvergedSystem 500’s latest update includes availability with the new Intel Xeon E7 v3 architecture. This update means you can experience up to a 39% increase in workload performance and the solution now delivers 12 new scale-up and scale-out configurations.

But it wasn’t the innovative new updates alone that caught the eyes and ears of Computer Technology Review (CTR) when they started selecting products for their 2015 Most Valuable Products awards. It was the flexibility and reliability that set HP ConvergedSystem 500 for SAP HANA apart.

HP ConvergedSystem 500 for SAP HANA’s ability to bring customers flexibility through a choice of operating systems, either SUSE Linux or Red Hat Linux, means that customers can use an operating system they may already be familiar with when beginning their SAP HANA journey, making it that much easier to get up and running. Additionally, the HP ConvergedSystem 500 also offers reliability through support for the latest-generation SAP business suite, SAP S/4HANA.

CTR also recognized HP ConvergedSystem 500 for SAP HANA’s comprehensive data backup and recovery solution – the hardware infrastructure, monitoring and management features that, “Deliver availability, continuity and reliability through HP Serviceguard for SAP HANA, the industry’s only automated, high availability disaster recovery solution.”

CTR’s MVP award helped validate what HP ConvergedSystem solutions for SAP HANA have set out to do since their inception: to bring lower TCO and higher value to customers through innovation, functionality and affordability.

More on the MVP awards
For the first time in CTR’s 35-year publishing history, their editorial judging panel awarded a select number of honorary MVP awards as a way of recognizing products that are truly in a class by themselves. HP ConvergedSystem 500 for SAP HANA won a Most Valuable Product award in the Data Storage category.

More information on HP ConvergedSystem solutions for SAP HANA’s latest updates can be found in this article, HP delivers more flexibility and choice in your infrastructure.

And check out the entire Most Valuable Product Award article from CTR, and the full list of 2015 products here.

To see what users have to say about HP ConvergedSystem on IT Central Station, check out the reviews here.

HP 3PAR StoreServ: What are Customers Saying?

This week features a guest blog post by Calvin Zito, Storage Expert at HP. Thanks Calvin for your contribution!

Recently TechTarget named the HP 3PAR StoreServ All-Flash Array (AFA) the product of the year in the category of all-flash.

This is such a satisfying recognition for the HP 3PAR AFA because, in the early days after HP announced the 7450 AFA, several AFA start-ups claimed that because our AFA wasn’t built from scratch, it didn’t qualify as all-flash storage. I even had one AFA competitor unfollow me on Twitter. He told me, “your defense of HP is dutiful but you are displaying a lack of understanding.”

Well, I think HP and I have a very deep degree of understanding – not sure I can say the same for the competitor. This TechTarget recognition and many of the recent wins I’ve seen against the AFA start-ups are proof that the HP 3PAR AFA should be considered by any customer who is looking at all-flash storage.

What is it that is different about HP 3PAR that qualifies it to be considered as an all-flash array?  Here are just a few key things to consider:

  • The performance of the 3PAR AFA is an industry leader, beating many of the all-flash start-ups.
  • Deduplication with 3PAR is done via our Gen4 ASIC. That means dedup happens at virtually “line speed”.
  • With the 3PAR architecture, we have a unique advantage with AFA: 3PAR Adaptive Sparing gets an additional 20% capacity out of industry-standard SSDs, so a 1.6TB SSD used by other AFAs is a 1.9TB drive with HP 3PAR.
  • Because HP 3PAR is an established array (not built from scratch AFA), it has a robust set of data services that no start-up AFA can claim.

TechTarget is the media company behind SearchStorage.com – if you want to read the AFA Product of the Year award for yourself, you can find it here.

What are customers saying?

These kinds of accolades are fantastic, but nothing beats hearing directly from our customers. I recently had the opportunity to talk to Christian Teeft. Christian is the CTO of Latisys, a US-based hybrid cloud services provider that has standardized on HP 3PAR StoreServ Storage, including the 3PAR all-flash array. Here’s my video with Christian.

You can see many more customer reviews on HP 3PAR StoreServ All-Flash by going to this IT Central Station page. If you want to learn more about HP 3PAR All-Flash, check out my blog at www.hp.com/storage/blog. And I’m happy to answer any questions you have – find me on Twitter as @HPStorageGuy.

Disclaimer: IT Central Station does not endorse or recommend any products or services. The views and opinions in this post do not reflect the opinions of IT Central Station.

Is flash storage now mainstream?

This week IT Central Station is featuring a guest blog post by Calvin Zito, Storage Expert at HP. Thanks Calvin for your contribution!

I’ve been thinking about what the most significant storage trend of 2014 was. In the past, I’ve been caught up in “hype” – those topics that get more attention than they probably deserve, or that have every vendor creating its own definition and sowing confusion along the way. So I’ve become sensitive to sniffing out hype.

Four or five years ago, solid-state disk (SSD) – or flash – was one of the much-hyped topics. I remember one storage vendor saying that all hard disk drives (HDDs) would be replaced with SSDs by 2013. Well, obviously 2013 came and went, and while SSD was gaining momentum, it wasn’t mainstream. Mark Peters, an analyst with Enterprise Strategy Group, said, “Solid-state storage is an important emerging change, not just an addition or tweak, in the world of storage.”

Is flash now mainstream?

In a word, yes. So what’s changed now that flash has gone mainstream? Consider several factors:

  • Cost of SSDs has dropped dramatically since the beginning of 2014
  • Deduplication – an emerging technology used to remove duplicate data that can increase the usable capacity of SSDs by 4 to 20 or even 30 times
  • Combining those two factors, the cost per gigabyte (GB) of flash has dropped to the cost of 15k RPM HDDs – at least it has for HP 3PAR StoreServ (see the quick arithmetic sketch after this list)
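
A quick back-of-the-envelope sketch of how deduplication changes the effective cost per usable gigabyte; the prices and ratios are placeholders, not HP figures:

```python
# Back-of-the-envelope arithmetic: deduplication divides the effective cost
# per usable GB by the dedup ratio. The prices and ratios below are
# placeholder figures, not HP list prices.
def effective_cost_per_gb(raw_cost_per_gb, dedup_ratio):
    return raw_cost_per_gb / dedup_ratio

if __name__ == "__main__":
    ssd_raw = 1.00   # hypothetical $/GB for flash
    hdd_15k = 0.25   # hypothetical $/GB for a 15k RPM HDD
    for ratio in (4, 10, 20):
        print(f"dedup {ratio}:1 -> ${effective_cost_per_gb(ssd_raw, ratio):.2f} per usable GB "
              f"(vs ${hdd_15k:.2f}/GB for 15k HDD)")
```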

More flash for your cash

One of the unique HP 3PAR innovations that has allowed us to drive down the cost of flash even further is something called Adaptive Sparing. 3PAR has a data striping mechanism called wide striping, which spreads data across all of the drives in the array. As a result of wide striping, we have worked with SSD vendors to release the spare capacity held inside each SSD and make it available as additional usable capacity. For example, what other vendors sell as a 1.6 terabyte (TB) drive is a 1.9 TB drive with HP 3PAR.

I have a short video that tells the story of flash storage going mainstream from a customer’s perspective. Lee Pedlow is the Sr. Director of Production Services at Sony Network Entertainment International. He joins the head of HP Storage, David Scott, to discuss the benefits they see with flash storage. Lee talks about:

  • Doing a “bake-off” with EMC, Pure Storage and HP 3PAR
  • Choosing HP 3PAR All-Flash because they saw a 5-to-10 fold performance improvement
  • Consolidating from 7 racks of EMC VMAX to a single rack of HP 3PAR StoreServ 7450 All-Flash Storage – and the rack they’re using is only 25% occupied
  • How transferring and integrating data with their existing Oracle applications was transparent

Read more about HP 3PAR Flash Arrays or download a free buyer’s guide for more information!

Upcoming Event: Software Quality Conference

This week’s guest blog post is by Douglas F. Reynolds. Douglas is president of PNSQC (Pacific Northwest Software Quality Conference). PNSQC’s mission is to enable knowledge exchange between relevant parties within the software quality community, to produce higher quality software. PNSQC provides opportunities to demonstrate, teach and exchange ideas on both proven and leading-edge software quality practices. This year’s annual PNSQC conference will take place October 20–22, 2014 in Portland, Oregon.

Let’s review the reasons attending PNSQC 2014 can help to build your Bridges to Quality.

PNSQC 2014 provides for rich interaction with leaders in the industry. Six experts explore new ideas in software quality. Our Keynote Speakers are Jon Bach on Live Site Quality & Richard Turner discussing Balancing Agility and Discipline in Systems Engineering. Check out all of our speakers at Keynotes, Invited Speakers and Workshops.

At PNSQC 2014 you will also learn from over 60 presenters as they reveal solutions, successes and issues facing software quality. These are tales from the trenches, with ideas you can take back to the office and put to immediate use. The complete program is now available with the Technical Abstracts.

At PNSQC 2014, networking is a priority throughout the day. Over lunch we offer Birds of a Feather sessions that give you an opportunity to hear from both the experts and your co-workers. The evening social events are held in collaboration with local software organizations, providing time to mingle with the locals and your fellow attendees.

PNSQC is a non-profit and the savings are passed back to you; register early and save. Join us for PNSQC 2014, register now!