Tuesday, 27 September 2016

Review: DB Networks Enhances Database Security with Machine Learning

Protecting databases takes more than just securing the perimeter; it also takes a deep understanding of how users and applications interact with databases, as well as knowing what databases are alive and breathing on the network. DB Networks aims to provide the intelligence, analytics and tools to bring insight into the database equation.

It’s no secret that database intrusions are on the rise, much to the chagrin of those responsible for infosec.  While many have focused on the notions of protecting the edge of the network and wrapping additional security around user access, the simple fact of the matter is that databases are the primary storehouses of private and sensitive information, and are often the true targets of intruders.

Recent events, such as the Target breach, the theft of security clearance information from the US OPM (Office of Personnel Management) and the theft of medical records from Anthem Healthcare, illustrate that protecting sensitive data is quickly becoming a losing battle. DB Networks is taking steps to turn the tide and bring victory to those charged with protecting databases.

The San Diego-based company offers its DBN-6300 appliance and its virtual cousin, the DBN-6300v, as founts of database activity, analytics, and discovery to give today's security professionals an edge against the ever-growing wave of cyberattacks targeting databases. Those products promise to equip security professionals and database administrators with the tools to identify and mitigate breaches before irreparable damage is done.

Case in point is the ubiquitous SQL injection attack, which is far more common than most will admit. SQL injection attacks have been around for more than ten years, and security professionals should be more than capable of protecting against them. However, according to Neira Jones, the former head of payment security for Barclaycard, some 97 percent of data breaches worldwide are still due to an SQL injection somewhere along the line.
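The mechanics behind that statistic are easy to demonstrate. Below is a minimal, self-contained sketch, using Python's sqlite3 module and an invented users table, showing how a classic injection payload rewrites a concatenated query while a parameterized query neutralizes it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, balance INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", 100), ("bob", 250)])

payload = "alice' OR '1'='1"  # classic injection string

# UNSAFE: string concatenation lets the payload rewrite the WHERE clause,
# so the query returns every row in the table.
unsafe_rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + payload + "'").fetchall()

# SAFE: a parameterized query treats the payload as a literal value,
# which matches no user name at all.
safe_rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (payload,)).fetchall()

print(len(unsafe_rows), len(safe_rows))  # 2 0
```

The table and payload are illustrative; the point is that the vulnerable form leaks both rows while the parameterized form returns none.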

Taking a Closer Look at the DB Networks IDS-6300:

I recently had a chance to put the DB Networks IDS-6300 through its paces at the company's San Diego offices. The IDS-6300 is a physical appliance, built on Intel hardware as a 2U rack-mountable server. The device features four 10/100/1000 Ethernet ports for data capture, one 10/100/1000 Ethernet admin port and one 10/100/1000 Ethernet customer service port, as well as a 480GB SSD and 2TB of archival storage.

The device can be deployed by plugging it into either a span port or a tap port located at the core switch in front of the database servers. The idea is to place the device logically ahead of the database servers, yet behind the application servers, so it can focus on SQL traffic. The IDS-6300 is managed via a browser-based interface that supports the Chrome, Firefox and Safari browsers, with full IE support planned for the near future.

I tested the device in a mock operational environment that included MS-SQL databases with a demo version of a banking application that incorporated some known vulnerabilities. Setting up the device entailed little more than defining the capture ports and some very basic post-installation items. Once the device was configured to capture data, the next step was to identify databases.

Here, the IDS-6300 does an admirable job; it is able to automatically discover any database that experiences any traffic, even simple communications such as a basic SQL statement. The device monitors for traffic 24/7 and continually checks for database activity.
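Passive discovery of this kind can be pictured with a toy sketch: watch flow records and flag any host answering on a well-known database port. The port-to-product mapping and the flow tuples below are illustrative assumptions, not DB Networks' actual implementation:

```python
# Well-known database service ports (illustrative subset).
DB_PORTS = {1433: "MS-SQL", 3306: "MySQL", 1521: "Oracle", 5432: "PostgreSQL"}

def discover_databases(flows):
    """flows: iterable of (client_ip, server_ip, server_port) tuples.
    Returns a map of server IP -> set of database products seen there."""
    found = {}
    for _client, server, port in flows:
        if port in DB_PORTS:
            found.setdefault(server, set()).add(DB_PORTS[port])
    return found

flows = [("10.0.0.5", "10.0.1.20", 1433),
         ("10.0.0.7", "10.0.1.20", 1433),
         ("10.0.0.5", "10.0.2.9", 3306)]
print(discover_databases(flows))
# {'10.0.1.20': {'MS-SQL'}, '10.0.2.9': {'MySQL'}}
```

A real appliance inspects the protocol payloads rather than trusting port numbers, but the principle of learning the database inventory from observed traffic is the same.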

That proves to be a critical element in the quest for securing databases – according to company representatives, many customers have discovered databases that IT was unaware were operating in production environments. What's more, the database discovery capability can be used to identify rogue databases or databases that were never shut down after a project completed.

The database discovery information offers administrators real insight into what exactly is operating on the network, and what is vulnerable to attack – knowing that information can be the first step in mitigating security problems, before even venturing into traffic analysis and detection.

Nevertheless, the product's real power comes into play when detecting SQL injection attacks. Instead of using canned templates or signatures, the IDS-6300 takes SQL attack detection to the next level: the device learns what normal traffic is, records and analyzes what that traffic accomplishes, and then builds a behavioral model.

Simply put, the device learns how an application communicates with a database, and that information is used to create a behavioral model. Once learning is complete, the device uses multiple detection techniques to validate future SQL statements against expected behavior. In practice, behavioral analysis proves immune to zero-day attacks, newly scripted attacks and even old, recycled attacks, because all of those fall outside the norms of expected behavior.

That behavioral analysis eliminates the need for signatures, blacklists, whitelists and other technologies that rely on pattern matching or static detection, which in turn reduces operational overhead and maintenance chores, almost converting SQL injection attack monitoring into a plug-and-play paradigm.
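As a rough illustration of the behavioral approach (a drastically simplified stand-in for the appliance's actual models), one can reduce each observed SQL statement to a structural skeleton during a learning phase, then flag any statement whose skeleton was never seen:

```python
import re

def skeleton(sql):
    """Reduce a SQL statement to its structural skeleton by stripping
    literal values, so "WHERE id = 7" and "WHERE id = 42" look alike."""
    s = re.sub(r"'[^']*'", "?", sql)   # replace string literals
    s = re.sub(r"\b\d+\b", "?", s)     # replace numeric literals
    return re.sub(r"\s+", " ", s).strip().upper()

class BehavioralModel:
    def __init__(self):
        self.known = set()

    def learn(self, sql):
        self.known.add(skeleton(sql))

    def is_anomalous(self, sql):
        return skeleton(sql) not in self.known

model = BehavioralModel()
model.learn("SELECT name FROM users WHERE id = 7")

# Same structure, different literal: matches learned behavior.
print(model.is_anomalous("SELECT name FROM users WHERE id = 42"))         # False
# Injected tautology changes the structure: flagged as anomalous.
print(model.is_anomalous("SELECT name FROM users WHERE id = 42 OR 1=1"))  # True
```

Because detection keys on structure rather than known attack strings, even a never-before-seen payload is caught the moment it deviates from the application's learned behavior.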

When SQL injection attacks occur, the IDS-6300 captures all of the traffic and transaction information around the attack. What's more, the device categorizes, analyzes and presents the critical information about the attack so that administrators (or application engineers) can modify database code or incorporate firewall rules very quickly to remediate the problem.

That brings up another interesting point: the IDS-6300 proves to be a good candidate for helping organizations improve application code. With many businesses turning to outsourcing and/or modifying off-the-shelf or open source software for application development, situations may arise where due diligence is not fully implemented, and agile development projects may introduce security flaws into application code. That is not an uncommon problem, at least according to Forrester Research's Manatosh Das: poor application coding persists despite lessons learned. Das claims that more than two-thirds of applications have cross-site scripting vulnerabilities, nearly half fail to validate input strings thoroughly, and nearly one-third can fall foul of SQL injection. Das adds that security professionals and software engineers have known about these types of flaws for years, but they continue to show up repeatedly in new software code.

The IDS-6300 will quickly detect those newly introduced flaws and prevent poor programming practices from creating vulnerabilities, and then provide the information that is needed to fix those flaws.

The IDS-6300 offers another advantage to customers; it can help them consolidate databases by identifying which databases are active and what they are used for. That in turn can lead companies to combine databases and significantly reduce licensing and support costs. DB Networks reports that one of its customers was able to reduce database licensing costs by over $1,000,000 by consolidating databases that were discovered by the IDS-6300.

The IDS-6300 starts at $25,000 and is available directly from DB Networks and authorized partners. For more information, visit DBNetworks.com.


Thursday, 22 September 2016

Performance Management Brings New Found Value to IT

IT departments are always struggling to garner the praise they deserve; most organizations look upon IT as a necessary evil, one that is both expensive and somewhat obstructionist. However, nothing could be further from the truth, and IT departments the world over have pursued ideologies that highlight the value of the services they offer, while also demonstrating the importance that a properly executed IT management plan brings to the bottom line.

At last week's Riverbed Disrupt event, GigaOM had a chance to talk with CIOs, as well as network managers, who have demonstrated the value of IT with application performance management platforms and services.

John Green, Chief Information Officer at Baker Donelson, the 64th largest law firm in the country, offered some real-world examples of how Application Performance Management (APM) and end-user monitoring bring demonstrable value to an organization's IT department.

Green said “My staff supports some 275 different applications and more than 40 video conferencing rooms, which are in near-constant operation.” Simply put, Green has come to know how reliable service and an acceptable end-user experience impact the view that the firm's 1,500 employees have of the IT department.

Green said “I was deploying the best technology money could buy, but my end-users still weren’t happy.” Green was looking at a situation where unhappy end users could create dire circumstances, which could impact the firm's bottom line. Green added “I could go to management meetings and offer proof that the networks were up 99.9% of the time, and that the databases and the email servers were delivering five-nines statistics of operation. Yet, my end users were still complaining.”

That is when Green had an epiphany, one that amounted to realizing that network performance statistics and end-user expectations rarely go hand in hand. Green said “We needed the ability to track the actual end-user experience, and then use that information to meet user expectations.”

Green found those much-desired capabilities with SteelCentral Aternity, a product that offers the ability to monitor any application on any device to provide the actual user perspective, at least when it comes to responsiveness and performance. Green said “I have been an Aternity user for about seven years, and it completely transformed the way we relate to our end users.”

Nonetheless, Green said “Aternity is only one part of the puzzle; although it provides valuable information, I would like to see the whole performance and experience picture on one pane of glass.”

That was a need that brought Green to the Riverbed Disrupt event. Riverbed recently purchased Aternity and is integrating the technology into its SteelCentral product line, looking to give its customers that single-pane-of-glass view. Green was impressed with the direction Riverbed is taking with end-to-end monitoring and offered “With the Riverbed and Aternity combination, there is now a mix of tools, that when combined into a single pane of glass, gives you total visibility across your network, from the servers to the circuits.”

While the Riverbed event was about new technologies, the real message was that by providing full monitoring capabilities to IT, staffers can better serve end-users and demonstrate the value of effective IT.


Tuesday, 20 September 2016

Riverbed Demonstrates the Importance of Full Stack Monitoring

Complete end-to-end monitoring has become increasingly important as enterprises strive to move from legacy data centers to the promise of software-defined environments. After all, network managers encumbered by missing pieces of the network connectivity puzzle are likely to fail in the transition to software-defined solutions, an observation made abundantly clear at Riverbed's Disrupt event held in Manhattan last week. Overcoming the obstacles of connectivity has become Riverbed's clarion call, and the company is now offering comprehensive solutions that not only ease the transition to software-defined solutions, but also bring much more control and information to the network management realm.

Case in point is the company's move to products that embrace the ideologies of the software-defined wide area network (SD-WAN), such as the company's SteelConnect 2.0, an application-defined SD-WAN solution. In an interview with GigaOM, Joshua Dobies, vice president of product marketing at Riverbed, said “the new capabilities offered allow branch offices to directly access the cloud, all without having to backhaul everything back to the data center.” Dobies added “SD-WAN paves the way for complete digital transformation, allowing enterprises to quickly access the benefits of the cloud, while not discarding their existing investments in data center technologies.”

Of course, the wholesale movement to the cloud means that technologies must transition to platforms that enable transformation without incurring disruption, a situation that proves to be the sweet spot for end-to-end monitoring. With the addition of full network visibility, along with end-user experience monitoring, network managers now have the ability to identify connectivity and performance problems on the fly, and can quickly address those problems with policies and tuning.

With the introduction of SteelConnect 2.0, the next version of its SD-WAN offering, the company is giving its customers greater visibility throughout the network, thanks to integration with SteelCentral, its end-to-end performance management platform, as well as with the SteelHead products and Riverbed's Interceptor offering, which gives SteelConnect greater scale for larger enterprise deployments. Riverbed Chairman and CEO Jerry Kennelly said “Today, we’re delivering a software-defined architecture for a software-defined world, and expanding that infrastructure deeper into the cloud and more broadly across all end users.”

In addition to the new SteelConnect 2.0 release, SteelCentral, its end-to-end performance management platform, will now incorporate technology from Aternity, which Riverbed acquired in July. Aternity brings the ability to monitor application performance on physical and mobile end-user devices to the SteelCentral product line. The addition of the Aternity technology, extending visibility into end-user devices, gives Riverbed a full portfolio of management offerings, according to Nik Koutsoukos, vice president of product marketing at Riverbed. “This brings full end-to-end management capabilities to those who need it most,” Koutsoukos told GigaOM.


Monday, 19 September 2016

Survey Reveals InfoSec is Doing it all Wrong!

While “doing it all wrong” may be an exaggeration, no one can deny that breaches are on the rise, and IT security solutions seem to be falling behind the attack curve. Yet those looking to place blame may only need to look in the mirror. At least, that's what a survey from cybersecurity vendor BeyondTrust indicates.

BeyondTrust surveyed over 500 senior IT, IS, legal and compliance experts about their privileged access management practices. The survey revealed some interesting trends, some of which should fall under the banner of “they should know better.” For example, only 14 percent regularly cycle their passwords, meaning that 86 percent of those surveyed are avoiding one of the top best practices for password and credential management. Adding insult to injury, only 3 percent of those surveyed monitor systems in real time and have the capability to terminate a live session that may be indicative of a breach.

Simply put, the survey indicates that the majority of organizations need to do much more to protect systems from breaches, many of which could be easily avoided if the proper policies were put into effect. That said, the survey also revealed that 52 percent of respondents are not doing enough about known risks. In other words, they understand what the risks are, but have not deployed the technologies or crafted the policies to mitigate them.

Mitigating those risks should be one of the top jobs of InfoSec today, especially since most of the identified risks can be quickly resolved using off-the-shelf products and by just applying best practices. BeyondTrust has developed some recommendations that InfoSec professionals can take to heart to lower risk and harden systems against breaches.

Those recommendations include:

  • Be granular: Implement granular least privilege policies to balance security with productivity. Elevate applications, not users.
  • Know the risk: Use vulnerability assessments to achieve a holistic view of privileged security. Never elevate an application’s privileges without knowing if there are known vulnerabilities.
  • Augment technology with process: Reinforce enterprise password hygiene with policy and an overall solution. As the first line of defense, establish a policy that requires regular password rotation and centralizes the credential management process.
  • Take immediate action: Improve real-time monitoring of privileged sessions. Real-time monitoring and termination capabilities are vital to mitigating a data breach as it happens, rather than simply investigating after the incident.
  • Close the gap: Integrate solutions across deployments to reduce cost and complexity, and improve results. Avoid point products that don’t scale. Look for broad solutions that span multiple environments and integrate with other security systems, leaving fewer gaps.
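The password-rotation recommendation above is straightforward to operationalize. The sketch below, with invented account names and a hypothetical 90-day policy window, flags privileged credentials that are overdue for rotation:

```python
from datetime import datetime, timedelta

MAX_AGE = timedelta(days=90)  # hypothetical rotation policy window

def overdue_credentials(credentials, now=None):
    """Return account names whose privileged passwords have not been
    rotated within the policy window. credentials maps name -> datetime
    of the last rotation."""
    now = now or datetime.now()
    return [name for name, last_rotated in credentials.items()
            if now - last_rotated > MAX_AGE]

# Illustrative account inventory (names and dates are invented).
creds = {
    "db_admin":   datetime(2016, 1, 10),
    "svc_backup": datetime(2016, 9, 1),
}
print(overdue_credentials(creds, now=datetime(2016, 9, 19)))  # ['db_admin']
```

A real privileged-access product would pull the rotation timestamps from a credential vault rather than a dictionary, but the policy check itself is this simple.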

In an interview with GigaOM, Kevin Hickey, President and CEO at BeyondTrust, offered “Companies that employ best practices and use practical solutions to restrict access and monitor conditions are far better equipped to handle today’s threat landscape.”

Hickey added “The survey proved critical for helping BeyondTrust to better identify threats based upon privilege management, and also helped us evolve our product offerings to make privilege management a much easier process for security professionals.”

Hickey's statements were validated by the launch of some new product offerings, which are aimed at bringing privilege management ease to those charged with IT security. The two new offerings are the BeyondTrust Managed Service Provider (MSP) Program and an Amazon Machine Image (AMI) of BeyondInsight available on the Amazon Marketplace. Those products are geared to prevent breaches that involve privileged credentials, with deployments that include on-premises solutions and virtual device solutions, as well as in the cloud or from a managed services provider.


Friday, 16 September 2016

Hyper Convergence Poses Unique Challenges for SAN Technologies

With the move towards hyper-convergence in full swing, many organizations are faced with the challenge of moving their massive data stores into virtualized environments, a situation that came to the forefront of discussion at VMworld 2016, where all things related to hyper-convergence were discussed ad nauseam.

Even so, many were still left wondering if it was even possible to have traditional storage technologies, such as SAN and NAS, effectively coexist in an environment that was transitioning into a hyper-converged entity. What's more, the uncertainties of transition, driven by potential communications problems, performance issues and incompatibilities, could force wholesale, expensive upgrades to support the move to hyper-convergence, an issue many network managers and CIOs would love to avoid.

Simply put, the move towards hyper-convergence, which promises improved efficiencies and reduced operating expenses, can be derailed by the high costs of transitioning to virtualized SANs. An irony worth noting. Nevertheless, those challenges have not stopped VMware Virtual SAN from becoming the fastest-growing hyper-converged solution, with over 3,000 customers to date. That said, there is still room for improvement, such as helping VMware Virtual SAN support even more workloads, and that is exactly where vendor Primary Data comes into play.

At VMworld 2016, Primary Data announced the availability of the company's DataSphere platform, which brings a storage-agnostic platform to virtualized environments. In other words, Primary Data is able to tear down storage silos without actually disrupting the configuration of those silos. It accomplishes that by creating a virtualization platform that is able to mask the individual storage silos and present them as a unified, tiered storage lake, which is driven by policies and offers almost infinite configuration options.

Abstracting data from storage hardware is not a new idea. However, Primary Data goes far beyond what companies such as FalconStor and StoneFly bring to the world of hyper-convergence. For example, DataSphere offers a single-pane-of-glass management console, which unifies management across the various storage tiers, regardless of storage type. What's more, the platform goes beyond the concept of an SLA (Service Level Agreement) and introduces a new concept, aptly abbreviated as SLO (Service Level Objective). Primary Data's Kaycee Lai, an executive with the company, explained to GigaOM that “SLOs are business objectives for applications. They define a commitment to maintain a particular state of the service in a given period. For example, specific write IOPS, read IOPS, latency, and so forth, to maintain for each application. SLOs are measurable characteristics of the SLA.”

Lai added “DataSphere will support DAS, NAS, and Object as storage types. Block level support for SAN will follow in the next release.” One of the key elements offered by the platform is the ability to work with storage tiers, without the disruption of having to rebuild storage silos. Lai added “Tiers are a logical concept in DataSphere. Tiers are simply a class of storage that is mapped to a particular SLO. The notion of having multiple tiers is not as important as having multiple objectives requiring the specific storage to meet those objectives. Customers can create as many objectives as their business requires.”
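The SLO idea Lai describes can be pictured as a small record of measurable targets checked against observed metrics. The field names and thresholds below are illustrative assumptions, not DataSphere's actual schema or API:

```python
# Hypothetical SLO record: measurable storage targets for one application.
slo = {"app": "billing",
       "read_iops_min": 5000,
       "write_iops_min": 2000,
       "latency_ms_max": 5.0}

def meets_slo(slo, observed):
    """Check observed storage metrics against the SLO's targets."""
    return (observed["read_iops"] >= slo["read_iops_min"]
            and observed["write_iops"] >= slo["write_iops_min"]
            and observed["latency_ms"] <= slo["latency_ms_max"])

# Healthy tier: all targets met.
print(meets_slo(slo, {"read_iops": 6200, "write_iops": 2500,
                      "latency_ms": 3.1}))  # True
# Write throughput below target: the platform would move data to a
# tier that can meet the objective.
print(meets_slo(slo, {"read_iops": 6200, "write_iops": 1400,
                      "latency_ms": 3.1}))  # False
```

In a policy-driven platform, a failed check like the second one is what triggers automated data movement to a storage tier mapped to that objective.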

In the quest to make hyper-convergence commonplace, Primary Data smooths the bumpy storage path with several capabilities, which the company identifies as:

  • Adapt to continually changing business objectives with intelligent data mobility.
  • Scale performance and capacity linearly and limitlessly with unique out-of-band architecture.
  • Reduce costs through increased resource utilization and simplified operations.
  • Simplify management through global and automated policies.
  • Accelerate upgrades of new solutions such as VMware vSphere 6 with seamless migration using existing infrastructure.
  • Reduce application downtime with automated non-disruptive movement of data.
  • Deliver a full range of data services across all applications in the data center.


Tuesday, 13 September 2016


Research Proves that a Customer Centric Approach Can Bring Unforeseen Value

Service management vendor ServiceNow recently commissioned Intergram Research to conduct a survey that dispels some of the common myths around service enablement, a realization that ServiceNow has long prophesied. In an interview with GigaOM, Holly Simmons, Senior Director, Global Product Marketing, Customer Service Management, said “the survey found that the companies that excel at customer service are 127% more likely to enable their customer service agents to enlist the help of different parts of the organization in real-time.”

Or more simply put, by transforming customer service into a team sport, organizations can better meet the needs of their customers, in a much shorter time frame. However, that transformation requires more than just good intentions; it requires a platform that can tear down the silos that surround people and systems, which will ultimately deliver the ability to share resolutions and improve customer service across the whole services spectrum.

That ideology is backed by the findings of Intergram Research, which surveyed senior managers in customer service roles at 200 U.S. enterprises with at least 500 employees.

The Survey Results:

The survey revealed three characteristics that separate the companies with the very best customer service from those that struggle. Companies identified as top-tier are:

  • More collaborative. They are more likely to have enabled their customer service agents to engage the help of different parts of the organization when addressing a customer’s problem.
  • Better problem-solvers. Customer service leaders are also more likely to be able to resolve the root cause of a customer’s problem (a crucial component of closing the resolution gap).
  • Self-service providers. And finally, these top-tier organizations are more likely to offer self-service options for common requests, freeing them up to focus on more strategic issues.

While for some the above may amount to little more than common sense, the fact of the matter is that many organizations have created silos around their various customer service elements, which hampers collaboration and adds to the time it takes to solve a customer's problems. What's more, those silos add hidden expenses to already overtaxed support resources, meaning that the collective knowledge of customer support must be relearned during almost any new interaction.

It is those inefficiencies that lead to customers fleeing specific vendors, especially in the realm of IT. If a customer or client cannot get a quick resolution to a problem, they may take their business elsewhere.

Simmons adds “Resolving a customer’s issue quickly and effectively requires real-time collaboration, coordination, and accountability among customer service, engineering, operations, field services and other departments. But that’s just not happening at more than half of the companies surveyed. Customer service still sits on an island without a bridge to other departments, partners, and customers. That slows the resolution process, and frustrates both customers and the agents trying to help them.”

The survey also illustrated that the primary problems facing organizations seeking to improve customer service include the difficulty of connecting all service processes, further hampered by siloed service departments and a lack of automation. Each of those three factors impacted more than 50% of those surveyed and, even viewed as a single issue, proved to be a primary barrier to successful customer service transformation.

Call to Action:

While the survey highlights both the problems and the solutions surrounding agile customer service, transformation can only take place if certain ideologies are upheld. According to ServiceNow, organizations that treat customer service as a “team sport” and engage the right people from relevant departments to solve problems are in a better position to proactively address the underlying reasons for customer calls. They also empower their customers to quickly answer their own questions, through self-service portals, knowledge bases, and communities, further reducing the need to interact with customer service agents. The more sophisticated customer service organizations aspire to the ideal of “no-service” by combining these practices to help eliminate the reasons for customer calls in the first place.


Announcing the Full Keynote Panelist Lineup at Gigaom Change

Gigaom Change 2016 Leader's Summit is just one week away, September 21-23 in Austin. The event will take place over two and a half days of keynote panels, with a lineup of speakers who are visionaries making strategic R&D and proof-of-concept investments to bring concepts to reality, forging multi-billion dollar companies along the way.

Three top industry experts in each of the following fields will highlight the current impact these innovations are having, then pivot toward what will be possible in the future: robotics, AI, AR/VR/MR, human-machine interface, cybersecurity, nanotechnology and 3D+ printing.

Keynote panelists include leading theorists and visionaries like Robert Metcalfe, Professor of Innovation, Murchison Fellow of Free Enterprise at the University of Texas, and Rob High, IBM Fellow, Vice President and CTO, IBM Watson. The lineup also includes practitioners who are actively implementing these technologies within companies, like Shane Wall, CTO and Global Head of HP Labs; Melonee Wise, CEO of Fetch Robotics; Stan Deans, President of UPS Global Logistics and Distribution; and Rohit Prasad, Vice President and Head Scientist, Amazon Alexa. We will hear from Sapient about AI, IBM about nanotech, SoftBank about robots and a wide range of other innovators creating solutions for visionary enterprises.

We couldn’t be more excited to introduce you to the full lineup of this extraordinary group.

Robert Metcalfe: Our opening night keynote speaker will be internet/ethernet pioneer Robert Metcalfe, Professor of Innovation, Murchison Fellow of Free Enterprise at The University of Texas.
Jacquelyn Ford Morie, Ph.D.: Speaking on the VR/AR/MR panel is Jacquelyn Ford Morie, Ph.D., Founder and CEO of All These Worlds LLC and Founder & CTO of The Augmented Traveler Corp. Dr. Morie is widely known for using technology such as virtual reality to deliver meaningful experiences that enrich people’s lives.
Rodolphe Gelin: Discussing the subject of robotics is Rodolphe Gelin, EVP Chief Scientific Officer, SoftBank Robotics. Gelin has worked for decades in the field of robotics, focusing primarily on developing mobile robots for service applications to aid the disabled and elderly. He heads the Romeo2 project to create a humanoid personal assistant and companion robot.
Manoj Saxena: On the artificial intelligence panel, Manoj Saxena, Executive Chairman of CognitiveScale and a founding managing director of The Entrepreneurs’ Fund IV, a $100m seed fund, will address the cognitive computing space.
Dr. Heike Riel: Speaking on the subject of nanotechnology is Dr. Heike Riel, IBM Fellow & Director, Physical Sciences Department, IBM Research. Dr. Riel’s work focuses on advancing the frontiers of information technology through the physical sciences.
Mark Rolston: Addressing human-machine interface is Mark Rolston, Cofounder & Chief Creative Officer, argodesign. Rolston is a renowned designer who focuses on groundbreaking user experiences and addresses the modern challenge of design beyond the visible artifact – in the realm of behavior, the interaction between human and machine, and other unseen elements.
Rob High: Discussing the subject of artificial intelligence is Rob High, IBM Fellow, Vice President and Chief Technology Officer of IBM Watson. High has overall responsibility to drive Watson technical strategy and thought leadership.
Dr. Michael Edelman: Addressing nanotechnology is Dr. Michael Edelman, Chief Executive Officer of Nanoco. Through his work with Nanoco, Dr. Edelman and his team have developed an innovative technology platform using quantum dots that is set to transform lighting, bio-imaging, and much more.
Melonee Wise: As CEO of Fetch Robotics, delivering advanced robots for the logistics industry, Melonee Wise will speak to the state of robotics today and the need and potential for the entire industry to transform to meet demand for faster, more personalized logistics/ops delivery using “collaborative robotics”.
Shane Wall: As Chief Technology Officer and Global Head of HP Labs, Shane Wall drives the company’s technology vision and strategy, new business incubation and the overall technical and innovation community. Joining our 3D+ Printing panel, Wall will provide real insights into how 3D+ printing is going to transform and disrupt manufacturing, supply chains, even whole economies.
David Rose: Taking a place on the human-machine interface panel is David Rose, an award-winning entrepreneur, author, and instructor at the MIT Media Lab. His research focuses on making the physical environment an interface to digital information.
Stan Deans: Joining the 3D+ Printing panel is Stan Deans, President of UPS Global Logistics and Distribution. Deans has been instrumental in building UPS’s relationship with Fast Radius by implementing its On Demand Production Platform™ and 3D printing factory in UPS’s Louisville-based logistics campus. By building this disruptive technology into its supply chain models, UPS is now able to bring new value to manufacturing customers of all sizes.
Rohit Prasad: Addressing human-machine interface is Rohit Prasad, Vice President and Head Scientist, Amazon Alexa, where he leads research and development in speech recognition, natural language understanding, and machine learning technologies to enhance customer interactions with Amazon’s products and services.
Liam Quinn: Joining our AR/VR/MR panel, Liam Quinn is VP, Senior Fellow & CTO for Dell, responsible for leading the development of the overall technology strategy. Key passions are xReality, where Quinn drives the development and integration of specific applications across AR & VR experiences, as well as remote maintenance, gaming and 3D applications.
Niloofar Razi: Niloofar Razi is SVP & Worldwide Chief Strategy Officer for RSA. As part of the cybersecurity panel, she brings more than 25 years’ experience in the technology and national security sectors, leading corporate development and implementation of investment strategies for billion-dollar industries.
Michael Petch: Michael Petch is a renowned author and analyst whose expertise in 3D+ printing will bring deep insights into advanced, additive manufacturing technologies on our nanotechnology panel. He is a frequent keynote speaker on the economic and social implications of frontier technologies.
Josh Sutton: Josh Sutton is Global Head, Data & Artificial Intelligence for Publicis.Sapient. As part of the AI panel, Josh will discuss how to leverage established and emerging artificial intelligence platforms to generate business insights, drive customer engagement, and accelerate business processes via advanced technologies.
Melissa MormanJoining our AR/VR/MR panel is Melissa Morman, Client Experience Officer, BuilderHomesite Inc. Morman is a member of the original founding executive team of BHI/BDX (Builders Digital Experience) and advises top executives in homebuilding, real estate, and building products industries on the digital transformation of their business.
John McClurgJoining our Cybersecurity panel is John McClurg, VP & Ambassador-At-Large, Cylance. McClurg was recently voted one of America’s 25 most influential security professionals, sits on the FBI’s Domestic Security (DSAC) & National Security Business Alliance Councils (NSBAC), and served as the founding Chairman of the International Security Foundation.
Mark HatfieldSpeaking on our Cybersecurity panel is Mark Hatfield, Founder and General Partner of Ten Eleven Ventures, the industry’s first venture capital fund that is focused solely on investing in digital security.
Mark HalversonSpeaking on our robotics panel is Mark Halverson, CEO of Precision Autonomy whose mission is to make unmanned and autonomous vehicles a safe reality. Precision Autonomy operates at the intersection of Artificial Intelligence and Robotics employing crowdsourcing and 3 dimensional augmented reality to allow UAVs and other unmanned vehicles to operate more autonomously.
James V HartSpecial guest James V Hart, is an award-winning and world-renowned Hollywood screenwriter whose film credits include Contact, Hook, Bram Stoker’s Dracula, Lara Croft: Tombraider, August Rush, Epic and many more projects in various stages of development, including Kurt Vonnegut’s AI fueled story Player Piano. With us he’ll discuss the impact of storytelling on how we’ve formed our views of the future.

Gigaom Change 2016 Leader’s Summit is just one week away, September 21-23 in Austin, but there are still a few tickets available for purchase. Reserve your seat today.

Easy Way to Download

Monday, 12 September 2016

Fluke briefing report: Closing the gap between things and reality

The Internet of Things is great, right? I refer the reader to the vast amount of positive literature washing through the blogosphere, no doubt being added to even as I write this. At the same time, plenty of people are pointing out the downsides — data security, for example, more general surveillance issues, or indeed the potential for any ‘smart’ object to be hacked.

All well and good; in other words, it’s a typical day in techno-paradise. But the conversation itself is skewed towards the ability to smarten up — that is, to deliver new generations of devices that have wireless sensors built in. What of the other objects that make up 98% (I estimate) of the world that we live in?

Enter companies such as Fluke, which earned its stripes over many years of delivering measurement kit to engineers and technicians, from multimeters to higher-end stuff such as thermal imaging and vibration testing. While such companies might not have a high profile outside of operational circles, they are recognising the rising tide of connectedness and doing something about it in their own domains.

In Fluke’s case, this means manufacturing plants, construction sites and other places where the term ‘rugged’ is a need to have, not a nice to have. Such sites have plenty of equipment that can’t simply be replaced with a smarter version, but which nonetheless can benefit substantially from remote measurement and management.

The current consequence, Fluke told me in a recent briefing about their let’s-connect-the-world platform (snappily titled the “3500 FC Series Condition Monitoring System”), is that failures are captured after the event. “We have more than 100,000 pieces of equipment and the reliability team can only assess so many. We’ve never been able to have maintenance techs collect data for us, until now,” reports a maintenance supervisor at one US car manufacturer.

That Fluke are upbeat about the market opportunity nearly goes without saying — after all, there really is a vast pool of equipment that can seriously benefit from being joined up — but the point is, the model goes as wide as there are physical objects to manage. And equally there’s a ton of companies like Fluke that are smartening up their own domains, making a splash in their own jurisdictions. Zebra’s smart wine rack may just have been a proof of concept, but give it five years and all wine lovers will have one.

Inevitably, there will be a moment of shared epiphany when all such platforms start integrating together, coupled with some kind of Highlander-like fight as IoT integration and management platforms look to knock the rest out of the market. I’m reminded of the moment, back in the early ’90s, when telecoms manufacturers adopted the HP OpenView platform en masse, leading to possibly the dullest Interop Expo on record.

Yes, the future will be boring, as we default to using stuff that we can remotely monitor and control. As consumers we may still like using ‘dumb stuff’ but for businesses that interface with the physical world, to do so would make no commercial sense. Equally however, such a dull truth will provide a platform for new kinds of innovation.

I could postulate what these might be but the Law of Unexpected Consequences has the advantage. All I do know is, it won’t be long at all before what is seen as exceptional — the ability to monitor just about everything — will be accepted as the norm. At that point, and to make better use of one of Apple’s catchphrases, everything really will be different.


Wednesday, 07 September 2016

Welcome to the Post-Email Enterprise: what Skype Teams means in a Slack-centered World

Work technology vendors very commonly — for decades — have suggested that their shiny brand-new tools will deliver us from the tyranny of email. Today, we hear it from all sorts of tool vendors:

  • work management tools, like Asana, Wrike, and Trello, built on the bones of task managers with a layer of social communications grafted on top
  • work media tools, like Yammer, Jive, and the as-yet-unreleased Facebook at Work, built on a social networking model to move communications out of email, they say
  • and most prominently, the newest wave of upstarts, the work chat cadre, has arrived, led by Atlassian’s HipChat and above all by the mega-unicorn Slack, a company with such a strong gravitational field that it seems to have sucked the entire work technology ecosystem into orbit around its disarmingly simple model of chat rooms and flexible integrations.

Has the millennium finally come? Will this newest paradigm for workgroup communications unseat email, the apparently undisruptable but deeply unlovable technology at the foundation of much enterprise and consumer communication?

Well, a new announcement hit my radar screen today, and I think that we may be at a turning point. In the words of Winston Churchill, in November 1942 after the Second Battle of El Alamein, when it seemed clear that the WWII allies would push Germany from North Africa,

Now this is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning.

And what is this news that suggests to me we may be on the downslope in the century-long reign of email?

Microsoft is apparently working on a response to Slack, six months after the widely reported termination of acquisition discussions. There has been a great deal of speculation about Microsoft’s efforts in this area, especially considering the now-almost-forgotten acquisition of Yammer (see Why Yammer Deal Makes Sense, and it did make sense in 2012). After that acquisition, however, Microsoft — and especially Bill Gates, apparently — believed the company would be better off building Slackish capabilities into an existing Microsoft brand. And since Yammer is now an unloved product inside the company, the plan was to build those capabilities into something the company has doubled down on. So now we see Skype Teams, coming soon.

Microsoft may be criticized for attempting to squish too much into the Skype wrapper with Skype Teams, but we’ll have to see how it all works together. It is clear that integrated video conferencing is a key element of where work chat is headed, so Microsoft would have had to build that anyway. The rest of the details will have to wait for actual hands-on inspection (so far, I have had only a few confidential discussions with Microsofties).

My point is that we are moving into new territory, a time when work chat tools will become the dominant workgroup communications platform of the next few decades. This means that the barriers to widespread adoption will have to be resolved, most notably, work chat interoperability.

Most folks don’t know the history of email well enough to recall that at one time email products did not interconnect: my company email could not send an email to your company email. However, the rise of the internet and the creation of international email protocols led to a rapid transition, so that we could stop using CompuServe and AOL to communicate outside the company.

It was that interoperability that led to email’s dominance in work communications, and similarly, it will take interoperability of work chat to displace it.

In this way, in the not-too-distant future, my company could be using Slack while yours might be using Skype Teams. I could invite you and your team to coordinate work in a chat channel I’ve set up, and you would be able to interact with me and mine.

If the world of work technology is to avoid a collapse into an all-encompassing monopoly with Slack at the center of it, we have to imagine interoperability will emerge relatively quickly. Today’s crude integrations — where Zapier or IFTTT copy new posts in HipChat to a corresponding channel in Slack — will quickly be replaced by protocols that all competitive solutions will offer.

We’ll have to see the specifics of Skype Teams, and where Facebook at Work is headed. Likewise, all internet giants — including Apple, Google, and Amazon — seem to be quietly consolidating their market advantages in file sync-and-share, cloud computing, social networks, and mobile devices. Will we see a Twitter for Work, for example, after an Amazon acquisition? Surely Google Inbox and Google+ aren’t the last work technologies that Alphabet intends for us?

But no matter the specifics, we are certainly on the downslopes of the supremacy of email. We may have to wait an additional 50 years for its last gasping breath, but we’re now clearly in the chat (and work chat) era of human communications, and there’s no turning back.


Tuesday, 06 September 2016

Is There Life After Dell? SonicWALL Thinks So!

When SonicWALL was acquired by Dell back in 2012, many wondered how it would fare under the auspices of the industry giant. Even so, SonicWALL managed to maintain market share in its core SMB business sector and start making inroads into the large, distributed enterprise sector. Nonetheless, when Dell decided to sell off its software assets, SonicWALL among them, to private equity firms, many began to wonder once again what that meant for the company.

SonicWALL provided the answers to those queries at the company’s PEAK 2016 event, held last week in Las Vegas. The primary topics of discussion focused on applying SonicWALL technology and on what the future holds for SonicWALL, its partners and customers.

Along with the requisite product announcements, SonicWALL also hosted several educational sessions bringing cloud security to the forefront of partners’ minds, as well as the challenges created by the ever-growing IoT infrastructure spreading through enterprises today.

SonicWALL offered a strong message that there is life after Dell, and that the company will thrive and grow despite the forced separation. For example, SonicWALL is in the process of strengthening its channel programs to better support both partners and end customers. The company also announced its Cloud GMS offering, which is aimed at simplifying management, enhancing reporting, and reducing overhead. What’s more, Cloud GMS brings cloud-based management, patching and updating to the company’s army of partners, providing them with a critical weapon in the battle against hosted security vendors and those plying “firewalls in the cloud.”

The importance of the forthcoming Cloud Global Management System (GMS) cannot be overstated. SonicWALL aims to eliminate the financial, technical support and system maintenance hurdles normally associated with traditional firewalls, transforming what was once an isolated security solution into a cloud-managed security platform, a capability that will prove important to both customers and partners.

For partners, Cloud GMS brings a unique, comprehensive, low-cost monthly subscription to the table, priced according to the number of firewalls under management. That model will allow partners to become something akin to hosted security service providers, shifting customer expenses to OpEx instead of CapEx.

The SonicWALL Cloud GMS solution offers:

  • Governance: Establishes a cohesive approach to security management, reporting and analytics to simplify and unify network security defense programs through automated and correlated workflows to form a fully coordinated security governance, compliance and risk management strategy.
  • Compliance: Rapidly responds to and fulfills compliance requirements from regulatory bodies and auditors with automatic PCI, HIPAA and SOX reports, customized by any combination of auditable data.
  • Risk Management: Provides the ability to move fast and drive collaboration and communication across a shared security framework, enabling quick security policy decisions based on time-critical and consolidated information for higher-level security efficacy.
  • Firewall management: MSPs will be able to leverage efficient, centralized management of firewall security policies similar to on-premises GMS features, including customer sub-account creation and increased control of user type and access privilege settings.
  • Firewall reporting: Real-time and historical, per firewall, and aggregated reporting of firewall security, data and user events will give MSPs greater visibility, control and governance while maintaining the privacy and confidentiality of customer data.
  • Licensing management: Seamless integration between GMS and MySonicWALL interfaces will allow users to easily and simply log into Hosted GMS to organize user group names and memberships, device group names and memberships, as well as adding and renewing subscriptions and support.



Monday, 05 September 2016

Work Processing: Coming soon to a ‘Doc’ near you


Book review: Silicon Collar: an optimistic perspective on humans, machines and jobs

A dilemma lurks in the pages of Vinnie Mirchandani’s book on the future of work. “The interviews I conducted show practitioners in a wide array of industries using technology to improve productivity and product quality. They were pragmatic and generally optimistic,” he says. “I also found a contrasting sense of pessimism in the academic and analyst world about ‘jobless futures.’ ”

As one in the “academic and analyst” community who finds himself in an apparent minority, I jumped at the opportunity to read what optimism Mirchandani had to offer. Truth be told, there’s plenty of it, for a relatively simple yet profound reason: humanity across the globe sees little cause to give up the things it holds valuable.

A salutary tale comes from the world of sport — basketball, specifically, where teams such as California’s Golden State Warriors are using every technology they can get their hands on to monitor performance in training and during games, to detect and pre-empt injuries, to plan seasons and indeed, careers for players.

Of course, technology can only take things so far. As Kirk Lacob, Assistant General Manager for the Warriors, comments: “The reality is that we can’t influence results completely—and we are a results business. But if we can push and pull the probabilities, we can hope to have a better outcome.” So, yes, technology can augment our capabilities without detracting from them.

But beyond this is a broader picture, about humanity’s relationship with sport. We can argue that it ain’t what it used to be, when kids with sneakers would throw hoops in some godforsaken, dusty back lot. Equally however, however augmented and scientific it becomes, it remains a bunch of people with a ball. For reasons beyond anyone’s ken, that remains interesting.

The same principle can be applied to many domains, from wine growing to white-collar areas such as accountancy. Yes, of course many jobs can be automated — not least the three Ds of dull, dirty, and dangerous, as in garbage collection or construction. And it is an open goal of a debating point to say that people in these positions might require some kind of retraining.

But are we, as Vivek Wadhwa, Fellow at the Rock Center for Corporate Governance at Stanford University, suggests, heading towards a catastrophe? “We won’t be able to retrain the workers who lose today’s jobs. They will experience the same unemployment and despair that their forefathers did,” he argues, rejecting the notion of a luddite fallacy.

Many argue that such ‘despair’ is inevitable, a consequence of the technology-driven income and value disparity that looms in the near distance. Others suggest that such dystopian views are cyclical: “About every 50 years, almost like clockwork, we have the collective experience that the sky is falling. Nothing could be further from the truth,” says analyst Denis Pombriant.

Building on this theme, Mirchandani chooses to look to the past to help understand the future. Citing the Law of Unintended Consequences, he makes the point that while we do not know what the jobs of the future will be, there will be plenty of them — “Review FastCompany’s projection of jobs in the next decade to include Urban Farmers, Neuro-Implant Technicians and Virtual Reality Experience Designers,” he says.

There’s a deeper point in the book, one that goes well beyond a pantomime “Oh yes there will, oh no there won’t” argument. Simply put (though it is explored in detail), it is that technology doesn’t cause inequality, but exploitation does. As new ways of working become possible, we owe it to ourselves to ensure that they are delivered to serve the many, not the few.

There’s enough in this thoroughly researched and readable book to back the view that automation can sit alongside artisanship; to coin a phrase, the two are ‘better together’. Beyond this, however, it is the exploitation argument I found most compelling, and most in need of being addressed by policy and governance. We will only have a bright future for work if we choose to make it so, or, as the commenter Kirby suggests on one of my previous articles, “Humans will have much bigger problems on their hands than worry[ing] about having a job.”

P.S. In the course of reviewing this book, I discovered my article above was mentioned. Which was nice.


Friday, 02 September 2016

Counteracting APTs with a Fine-tuned SIEM Solution

Though not the most prevalent type of cyber attack, advanced persistent threats (APTs) are certainly the most devastating. Like a sudden volcanic eruption that has been slowly building underneath, an APT may stay invisible for many months, yet finally result in serious financial damage, ruin a company’s reputation, and even lead to human victims, as happened after the scandalous Ashley Madison data breach.

The annual cyber threat report M-Trends 2016 by Mandiant stated that in 2015 the average number of days organizations were compromised before they discovered the breach (or were notified of it) was 146. To make things even worse, security specialists discover the majority of APTs by accident, which means that an APT’s real lifecycle is limited only by the vigilance of its victims. So is the battle against APTs really a matter of luck? Or is there a way to detect them before they wreck an organization’s assets?

Why are traditional tools no good?

Given the success of APTs, you may think that targeted organizations are simply negligent about their security and take inadequate measures. In reality, targeted entities usually adopt a whole range of security tools, from standard firewalls and antiviruses to sophisticated anti-malware products. The problem is that these traditional tools aren’t able to withstand an APT attack, leaving a great number of blind spots in an enterprise’s infrastructure.

For example, firewalls, as an essential part of network security, can close unnecessary ports and block unsolicited incoming network traffic. Their advanced versions can even partially protect against DDoS attacks. But they can’t detect malicious users or analyze packets containing malware, and they obviously cannot deal with attacks that don’t go through them. Due to traditional firewalls’ limited functionality, most organizations supplement them with intrusion prevention systems (IPS), which examine network traffic flows to detect and prevent vulnerability exploits. However, IPS have their limitations too, as they are helpless against client-side application attacks.

Moreover, managing an array of security tools is difficult and costly, as you need to acquire multiple software licenses and hire specialists to deal with each particular piece of software. It’s also practically impossible to manually correlate data from multiple systems in order to detect and respond to proliferating attacks. And, finally, scattered solutions cannot ensure a 360° view of a company’s IT environment, which results in loopholes that let hackers in.

At the same time, today’s security software market offers advanced security information and event management (SIEM) solutions that can replace multiple scattered tools. Even if not the ultimate remedy against APTs, SIEM systems can assist security officers at different stages of an attack.

Learning from life lessons: The case of Carbanak attacks

To get armed for possible attacks, it’s useful to analyze previous mistakes. In the history of security breaches, APTs have a ‘track record’ of calamitous intrusions. Among them is the series of attacks by the Carbanak group, which targeted more than 100 banks and other financial institutions in 30 nations (with the US named the second-biggest target), making it one of the largest bank thefts ever.

Having started out in August 2013, this sophisticated hacking gang was first publicly disclosed only in 2015, when its total haul had already reached $1 billion. To stay unnoticed and learn every bank inside out, the attackers used a whole range of tactics from spear phishing to latent watch, stealing money in modest batches. The theft was revealed accidentally, after one ATM’s strange behavior was examined. However, disclosure didn’t stop the Carbanak hackers from their shady affairs: a new series of attacks was registered in 2016. This time, the gang aims to double its previous haul.

But what if the victims had had a fine-tuned SIEM solution?

As the banks were unprepared for these attacks and had no relevant solutions in place to detect the APTs, we decided to take this case as an example and illustrate how a fine-tuned SIEM solution, such as IBM QRadar, could help to reveal the Carbanak advanced persistent threats.

Malware Infection

According to the publicly available details of the attack, the hackers got access to bank employees’ computers through opportunistic malware. IBM Security QRadar QFlow Collector could pinpoint a malware infection by ensuring constant monitoring of the traffic going in and out of an organization. The tool processes session and flow information from external sources in such formats as QFlow, NetFlow, SFlow and JFlow, as well as sessions from Packeteer, which makes it possible to baseline network traffic and implement anomaly rules, and to build up specific correlation rules to detect the following:

  • communications with known botnet control centers and malicious IP addresses. This information can come from a subscription feed (IBM X-Force) or be integrated into the SIEM from open sources.

  • communications with unusual and potentially malicious countries and regions

  • communications via unusual ports (e.g. 6667/IRC)

  • communications containing specific payloads (e.g. bot control commands), which is possible with IBM Security QRadar QFlow Collector’s functionality.
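In rule-engine terms, each of these checks reduces to matching flow records against a threat-intelligence list or a port baseline. A minimal sketch in Python (the field names, IP sets and `flag_flow` helper are invented for illustration; QRadar expresses such logic through its own correlation-rule interface):

```python
# Toy correlation check over flow records; not QRadar rule syntax.
# KNOWN_BAD_IPS stands in for a threat-intel feed such as an X-Force
# subscription; UNUSUAL_PORTS for a baseline of rarely seen ports.
KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.42"}  # documentation-range IPs
UNUSUAL_PORTS = {6667}                            # e.g. IRC

def flag_flow(flow):
    """Return the alert reasons triggered by a single flow record."""
    reasons = []
    if flow["dst_ip"] in KNOWN_BAD_IPS:
        reasons.append("known malicious IP")
    if flow["dst_port"] in UNUSUAL_PORTS:
        reasons.append("unusual port")
    return reasons

flows = [
    {"src_ip": "10.0.0.5", "dst_ip": "203.0.113.7", "dst_port": 443},
    {"src_ip": "10.0.0.8", "dst_ip": "192.0.2.10", "dst_port": 6667},
    {"src_ip": "10.0.0.9", "dst_ip": "192.0.2.11", "dst_port": 443},
]
flagged_flows = [(f["src_ip"], flag_flow(f)) for f in flows if flag_flow(f)]
```

A real deployment would, of course, match against continuously updated feeds and payload signatures rather than static sets.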


Spear Phishing

Once the attackers gained access to employees’ computers, they started a massive spear phishing campaign that was very hard to identify. Indeed, a SIEM solution can hardly distinguish an infected email message originating from a legitimate email account (a workstation with malware) from a legitimate email. However, if the email server is connected to a SIEM solution as a log source, it’s possible to detect the following abnormalities:

  • an enormous number of messages sent from the same account within a short time

  • email messages sent outside business hours from a corporate account

  • a huge number of messages with the same subject to different mailboxes


The advanced correlation with physical security controls also allows detection of mailouts from users before their check-in through a physical security gate.
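Sketched in Python, the mailbox checks above amount to simple counting over mail-server log records (the message fields and thresholds here are assumptions for illustration, not an actual mail-log schema):

```python
# Toy anomaly checks over mail-server log records; thresholds and
# field names are invented for illustration.
from collections import Counter
from datetime import datetime

BUSINESS_HOURS = range(8, 19)   # assumed 08:00-18:59 policy
BURST_THRESHOLD = 100           # messages per sender / per subject

def email_anomalies(messages):
    reasons = set()
    per_sender = Counter(m["sender"] for m in messages)
    per_subject = Counter(m["subject"] for m in messages)
    for sender, n in per_sender.items():          # burst from one account
        if n > BURST_THRESHOLD:
            reasons.add(f"burst of {n} messages from {sender}")
    for subject, n in per_subject.items():        # same subject, many mailboxes
        if n > BURST_THRESHOLD:
            reasons.add(f"mass mailing with subject {subject!r}")
    for m in messages:                            # off-hours sending
        if m["sent_at"].hour not in BUSINESS_HOURS:
            reasons.add(f"off-hours mail from {m['sender']}")
    return reasons

msgs = [{"sender": "alice@bank.example", "subject": "Q3 report",
         "sent_at": datetime(2016, 9, 6, 3, 14)}]   # 03:14, off-hours
flags = email_anomalies(msgs)
```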

Privilege escalation and deeper reconnaissance

Systematic spear phishing coupled with malware infection allowed the gang to continue their attack through privilege escalation and deeper reconnaissance, steps typical of all APTs.

Privilege escalation could be detected with a fine-tuned SIEM solution, given the following:

  • audit enabled and properly configured on workstations

  • log data collected from workstations and sent to a SIEM

  • user accounts and roles mapped in a SIEM solution using information from LDAP/AD


In such a scenario, any user with no Admin role logging in with administrative privileges would trigger an alert in a SIEM solution.

Moreover, most SIEM solutions contain out-of-the-box reconnaissance detection correlation rules that can be fine-tuned to minimize false positives. In our case, deeper reconnaissance originating from the internal corporate network could be identified if firewalls were sending access logs to the SIEM solution.
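The privileged-logon check itself is little more than a lookup against the role map synced from LDAP/AD. A minimal sketch (the event format and the `LDAP_ROLES` mapping are hypothetical):

```python
# Toy privileged-logon check; the event dict and role map are invented.
LDAP_ROLES = {"jsmith": "User", "akarev": "Admin"}  # synced from LDAP/AD

def check_privileged_logon(event):
    """Alert when an account without the Admin role logs on with
    administrative privileges (e.g. Windows security event 4672)."""
    if event["privileged"] and LDAP_ROLES.get(event["user"]) != "Admin":
        return f"privilege escalation suspected: {event['user']}"
    return None

alert = check_privileged_logon({"user": "jsmith", "privileged": True})
```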

Latent watch

To better understand the internal systems, the hackers assigned operators to work with video and screen-capture feeds grabbed and transmitted by the previously injected malware.

Unusual traffic analysis based on anomaly rules would detect video and screen capturing activities, since video transmission produces a lot of traffic that could be caught by IBM Security QRadar QFlow Collector.
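As a rough illustration of such an anomaly rule, outbound volume per host can be compared against its historical baseline; a host that suddenly pushes orders of magnitude more traffic (as a continuous screen-capture feed would) stands out immediately. The byte counts and three-sigma threshold below are invented for the example:

```python
# Toy volume-anomaly check; byte counts and threshold are illustrative.
from statistics import mean, stdev

def volume_anomalies(history, current, sigmas=3):
    """Flag hosts whose current outbound bytes exceed their historical
    mean by more than `sigmas` standard deviations."""
    flagged = []
    for host, samples in history.items():
        mu, sd = mean(samples), stdev(samples)
        if current.get(host, 0) > mu + sigmas * sd:
            flagged.append(host)
    return flagged

history = {"ws-042": [1.1e6, 0.9e6, 1.0e6, 1.2e6, 0.8e6]}  # ~1 MB/day
# A sustained screen-capture feed is far outside the baseline:
flagged = volume_anomalies(history, {"ws-042": 9.5e8})
```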

Infection of computers attached to ATMs

The Carbanak gang successfully infected computers attached to ATMs in order to make the machines dispense cash. If compromised administrative accounts were used to spread the infection, a SIEM solution would be able to alert security personnel about the following:

  • a logged-in admin account did not belong to the attacked server’s support team (mapping with LDAP/AD)

  • a specific admin user account was logged in to many servers in a short time.

Additionally, advanced correlation with Identity and Access Management (IAM) solutions and ticketing systems would make it possible to detect cases where an admin user logged in to a system with no corresponding ticket or IAM approval.
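The second check (one admin account fanning out across many servers in a short window) can be sketched as a sliding-window count over logon events; the timestamps, account names and thresholds below are invented:

```python
# Toy fan-out detection over logon events; thresholds are illustrative.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
MAX_SERVERS = 5   # distinct servers per account inside the window

def fan_out_alerts(logons):
    """logons: iterable of (timestamp, account, server). Alert when one
    account reaches MAX_SERVERS distinct servers within WINDOW."""
    alerts = set()
    by_user = defaultdict(list)
    for ts, user, server in sorted(logons):
        by_user[user].append((ts, server))
        recent = {s for t, s in by_user[user] if ts - t <= WINDOW}
        if len(recent) >= MAX_SERVERS:
            alerts.add(user)
    return alerts

base = datetime(2016, 9, 6, 2, 0)   # 02:00, itself outside working hours
logons = [(base + timedelta(minutes=i), "svc_admin", f"atm-pc-{i}")
          for i in range(6)]
```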
Compromise of internal databases and creation of fraudulent accounts

During the attacks, the hackers manipulated Oracle databases to open payment or debit card accounts at the same bank, or to transfer money between accounts using the online banking system. Normally, all activity related to creating new accounts should pass through a validation procedure. Depending on that procedure and the tools used for validation, this information could be integrated with a SIEM solution to alert on unexpected account creation. If there’s no such validation in place, each new account creation could trigger an alert for investigation by a security analyst.

A SIEM consultant could also help a bank get reports on business-critical data modification by doing the following:

  • enabling Oracle Fine Grained Auditing (FGA) or a similar audit mechanism

  • compiling and integrating a list of approved database users, which would allow the SIEM solution to detect and alert on data modification performed by unapproved accounts
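Sketched in Python, the approved-list correlation is a simple filter over FGA-style audit rows (the row schema and account names are invented for the example):

```python
# Toy check of audit rows against an approved-account list; the row
# schema and account names are invented.
APPROVED_DB_USERS = {"CORE_BANKING", "BATCH_ETL"}

def unapproved_modifications(audit_rows):
    """Return rows where a DML statement on a critical table came from
    an account outside the approved list."""
    return [r for r in audit_rows
            if r["action"] in {"INSERT", "UPDATE", "DELETE"}
            and r["db_user"] not in APPROVED_DB_USERS]

rows = [
    {"db_user": "CORE_BANKING", "action": "UPDATE", "table": "ACCOUNTS"},
    {"db_user": "JDOE_ADHOC", "action": "INSERT", "table": "ACCOUNTS"},
]
suspicious = unapproved_modifications(rows)
```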


Abuse of the Society for Worldwide Interbank Financial Telecommunication system

To move large amounts of money into accounts they controlled, the attackers abused the Society for Worldwide Interbank Financial Telecommunication (SWIFT) system. A well-configured SIEM solution could ensure constant monitoring of all critical financial applications. If a particular application weren’t supported by QRadar out-of-the-box, appropriate parsing, mapping and categorization could be developed. Once the custom data is properly normalized, a SIEM solution would be able to detect abnormal money transfers with anomaly correlation rules when any of the following are true:

  • a single account has transferred more than the set limit

  • a single account has made many small transfers to one or several specific accounts

  • the total amount of transfers from one account in a specific timeframe has exceeded the limit
  • many accounts made transfers to the same target account in a specific period
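Once the transfer records are normalized, the four rules above reduce to counting and summing per account within the timeframe. A minimal sketch (the limits and record fields are invented, not actual SWIFT message fields):

```python
# Toy versions of the four transfer rules; all limits are illustrative.
from collections import Counter, defaultdict

SINGLE_LIMIT = 100_000   # per-transfer limit
TOTAL_LIMIT = 250_000    # per-account total within the timeframe
MANY_SMALL = 10          # count of small transfers that triggers a rule
FAN_IN = 20              # distinct senders to one target account

def transfer_alerts(transfers):
    """transfers: dicts with 'src', 'dst', 'amount', all in one timeframe."""
    alerts = set()
    total_out = defaultdict(float)
    small_out = Counter()
    fan_in = defaultdict(set)
    for t in transfers:
        if t["amount"] > SINGLE_LIMIT:
            alerts.add(("over-limit transfer", t["src"]))
        total_out[t["src"]] += t["amount"]
        if t["amount"] < SINGLE_LIMIT / 10:
            small_out[t["src"]] += 1
        fan_in[t["dst"]].add(t["src"])
    alerts |= {("total over limit", s) for s, v in total_out.items() if v > TOTAL_LIMIT}
    alerts |= {("many small transfers", s) for s, n in small_out.items() if n >= MANY_SMALL}
    alerts |= {("fan-in target", d) for d, srcs in fan_in.items() if len(srcs) >= FAN_IN}
    return alerts

# Carbanak-style behavior: many modest transfers to one mule account.
alerts = transfer_alerts([{"src": "ACC1", "dst": "MULE1", "amount": 9_000}] * 12)
```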


You can thwart it

The case we’ve just analyzed proves that companies are not helpless in their battle against APTs. It may sound strange, but as sophisticated as they are, APTs have a weakness hiding in the letter “P.” Persistence, the hardest part to deal with, also means that attackers leave a lot of traces in the course of their attacks. Thus, security administrators well-armed with a relevant SIEM solution have multiple touchpoints at which to detect intruders and stop them before their illegal activities lead to dramatic data and money losses.
