Brandeis GPS Blog

Insights on online learning, tips for finding balance, and news and updates from Brandeis GPS

Tag: software as a service

So What Is the Risk of Mobile Malware?

By: Derek Brink

Originally from: https://blogs.rsa.com/risk-mobile-malware/

Obvious, or oblivious? Short-term predictions eventually tend to make us look like one or the other—as Art Coviello astutely noted in making his own predictions for the security industry in 2014—depending on how they actually turn out. (Long-term predictions, however, which require an entirely different level of thinking, are evaluated against a different scale. For example, check out the many uncannily accurate predictions Isaac Asimov made for the 2014 World’s Fair, from his reflections on the just-concluded 1964 World’s Fair.)

Art’s short-term prediction about mobile malware:

2014 is the tipping point year of mobile malware: As businesses provide greater mobile access to critical business applications and sensitive data, and consumers increasingly adopt mobile banking, it is easy to see that mobile malware will rapidly grow in sophistication and ubiquity in 2014. We’ve already seen a strong uptick in both over the past few months and expect that this is just the beginning of a huge wave. We will see some high-profile mobile breaches before companies and consumers realize the risk and take appropriate steps to mitigate it. Interestingly, the Economist recently featured an article suggesting such fears were overblown. It is probably a good idea to be ready just the same.

The Economist article Art references (which is based on an earlier blog) asserts that “surprisingly little malware has found its way into handsets. . . smartphones have turned out to be much tougher to infect than laptops and desktop PCs.” (Ironically, the Economist also publishes vendor-sponsored content such as How Mobile Risks Are Pushing Companies Towards Better Security. I suppose that’s one way to beat the obvious or oblivious game: Place a bet on both sides.)

RSA’s Online Fraud Resource Center provides some terrific fact-based insights on the matter, including Behind the Scenes of a Fake Token Mobile App Operation.

But the legitimate question remains: What is the risk of malware on mobile? Let’s focus here on enterprise risks, and set aside the consumer risks that Art also raised as a topic for another blog.

Keep in mind the proper definition of “risk”—one of the root causes of miscommunication among security professionals today, as I have noted in a previous blog—which is “the likelihood that a vulnerability will be exploited, and the corresponding business impact.” If we’re not talking about probabilities and magnitudes, we’re not talking about risk.

Regarding the probability of malware infecting mobile devices:

  • The Economist’s article builds on findings from an academic paper published by researchers from Georgia Tech, along with a recent PhD student who is now the Chief Scientist at spin-off security vendor Damballa. Their core hypothesis is that the activities of such malware—including propagation and update of malicious code, command and control communications with infected devices, and transmission of stolen data—will be discernible in network traffic.
  • From three months of analysis, they found that about 3,500 mobile devices (out of a population of 380 million) were infected—roughly 0.001%, or 1 in 100,000.
  • Compare this to the computers cleaned per mille (CCM) metric regularly reported by Microsoft: For every 1,000 computers scanned by the Microsoft Malicious Software Removal Tool, CCM is the number of computers that needed to be cleaned after they were scanned. For 1H2012, the infection rate per 1,000 computers with no endpoint protection was between 11.6 and 13.6 per month (a quick back-of-the-envelope comparison appears below).
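To make the comparison concrete, here is a rough back-of-the-envelope calculation of the two rates cited above. This is just a sketch in Python; the 12-per-1,000 figure is an assumed midpoint of Microsoft’s reported 11.6–13.6 range.

```python
# Rough comparison of the two infection rates cited above.
mobile_infected = 3_500           # infected mobile devices observed
mobile_population = 380_000_000   # devices in the monitored population
mobile_rate = mobile_infected / mobile_population   # ~0.0009%, i.e. roughly 0.001%

pc_rate = 12 / 1_000              # assumed midpoint of the 11.6-13.6 CCM range

print(f"mobile: {mobile_rate:.4%}  PC: {pc_rate:.1%}  ratio: {pc_rate / mobile_rate:,.0f}x")
# -> mobile ~0.0009%, PC ~1.2%, a ratio on the order of 1,000x
```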

All of this nets out to say that currently, mobile endpoints are three orders of magnitude less likely to be infected by malware than traditional endpoints.

But doesn’t this conflict with other published research about mobile malware? For example, I’ve previously blogged about an analysis of 13,500 free applications for Android devices, published in October 2012 by university researchers in Germany:

  • Of 100 apps selected for manual audit and analysis, 41 were vulnerable to man-in-the-middle (MITM) attacks due to various forms of SSL misuse (a sketch of this class of mistake appears after this list).
  • Of these 41 apps, the researchers captured credentials for American Express, Diners Club, PayPal, bank accounts, Facebook, Twitter, Google, Yahoo, Microsoft Live ID, Box, WordPress, remote control servers, arbitrary email accounts, and IBM Sametime, among others.
  • Among the apps with confirmed vulnerabilities against MITM attacks, the cumulative installed base is up to 185 million users.
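The original findings concern Android (Java) apps, but the underlying mistake, bypassing certificate validation, is easy to illustrate in any language. The following is a minimal, hypothetical Python sketch of the same class of error using the requests library; the endpoints and credentials are placeholders, not anything taken from the study.

```python
import requests

# Correct behavior: requests verifies the server's TLS certificate by default,
# so a man-in-the-middle presenting a forged certificate raises an SSLError.
requests.get("https://api.example.com/account")    # hypothetical endpoint

# The class of SSL misuse the researchers describe: trusting any certificate.
# In Python terms that means turning verification off, which silently accepts
# an attacker's certificate and exposes the credentials in transit.
requests.post(
    "https://api.example.com/login",               # hypothetical endpoint
    data={"user": "alice", "password": "s3cret"},  # placeholder credentials
    verify=False,                                  # <-- the vulnerability
)
```

The fix is simply to leave verification on (or pin the expected certificate), the equivalent of an Android app using the platform’s default trust manager instead of a custom one that accepts everything.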

In another blog, I’ve noted that mobile applications have a more complex attack surface than traditional web applications—in addition to server-side code, they also deal with client-side code and (multiple) network channels. The impact of these threats is often multiplied, as in the common case of support for functions that were previously server-only (e.g., offline access). This makes security for mobile apps even more difficult for developers to address—mobile technology is not as well known, development teams are not as well educated, and testing teams are harder to keep current.

Meanwhile, malware on mobile is indeed becoming more prevalent: Currently over 350,000 instances from 300 malware families. It is also becoming more sophisticated—e.g., by obfuscating code to evade static and dynamic analysis, establishing device administration privileges to install additional code, and spreading code using Bluetooth, according to the IBM X-Force 2013 Mid-Year Trend and Risk Report.

But threats, vulnerabilities, and exploits are not risks. What would be obvious to predict is this: The likelihood of exploits based on mobile malware will increase dramatically in 2014—point Art.

The other half of the risk equation is the business impact of mobile exploits. From the enterprise perspective, we would have to estimate the cost of exploits such as compromise of sensitive corporate data, surveillance of key employees, and impersonation of key corporate identities—e.g., as part of attacks aimed at social networks or cloud platforms, where the mobile exploits are the means to a much bigger and more lucrative end. It seems quite reasonable to predict that we’ll see some high-profile, high-impact breaches along these lines in 2014—again, point Art.

Obvious or oblivious, you can put me down squarely with Art’s prediction for this one, with the exception that I would say the risk of mobile malware is much more concentrated and targeted than the all users/all devices scenario he seems to suggest.

About the Author:

Derek E. Brink, CISSP is a Vice President and Research Fellow covering topics in IT Security and IT GRC for Aberdeen Group, a Harte-Hanks Company. He is also an adjunct faculty member with Brandeis University, Graduate Professional Studies, teaching courses in our Information Security Program. For more blog posts by Derek, please see http://blogs.aberdeen.com/category/it-security/ and http://aberdeen.com/_aberdeen/it-security/ITSA/practice.aspx


How Big Data Has Changed 5 Boston Industries

By: 

Emerging technologies have unlocked access to massive amounts of data, data that is mounting faster than organizations can process it. Buried under this avalanche of analytics are precious nuggets of information that organizations need to succeed. Companies can use these key insights to optimize efficiency, improve customer service, discover new revenue sources, and more. Those who can bridge the gap between data and business strategy will lead in our new economy.

Big Data’s potential impact on enterprises and industries as a whole is boundless. This potential is already being realized here in the Hub. Boston has been ahead of the curve when it comes to Big Data, thanks to our unique innovation ecosystem, or our “Big Data DNA,” as the Massachusetts Technology Leadership Council puts it. As a result, Boston is home not only to an especially high concentration of Big Data startups but also to powerhouse industries that have strategically leveraged analytics and transformed the space.

Check out how data and analytics have changed these five Boston industries.

1. Marketing & Advertising


In our age of online marketing, marketers have access to mountains of data. Pageviews, clicks, conversion, social shares…the list is endless. That doesn’t even account for the demographic data marketers collect and interpret every day.

These analytics have enabled marketers to access a more comprehensive report of campaign performances and in-depth view of buyer personas. Armed with these insights, marketers are able to refine their campaigns, improve forecasts, and advance their overall strategy.

Big Data also enables targeted marketing, a crucial component of today’s online strategy. You know those eerily accurate advertisements on your Facebook page? You can thank Big Data for that.

Analytics have unlocked enormous potential for marketers to better create, execute, and forecast campaigns. As a result, Boston has boomed with organizations entirely devoted to providing data-driven marketing solutions. HubSpot and Jumptap have emerged as leaders in this space, raising about $2.5 billion combined. Attivio, Visible Measures, and DataXu are also leading marketing solutions providers.

2. Healthcare


It shouldn’t come as a surprise that healthcare represents a top industry in Boston’s Big Data ecosystem. The healthcare industry collects and analyzes enormous volumes of clinical data on a daily basis. Partners Healthcare alone has some two billion data elements from over six thousand patients, according to the Massachusetts 2014 Big Data Report.

Big Data’s impact can be seen first and foremost with the electronic health record. Big Data has launched the electronic health record into the twenty-first century, revolutionizing patient care, and empowering the success of companies like athenahealth based in Watertown.

“The meaningful use of electronic health records is key to ensuring that healthcare focuses on the needs of the patient, is delivered in a coordinated manner, and yields positive health outcomes at the lowest possible cost,” the report said.

The space has expanded even more since Massachusetts passed legislation in 2012 requiring all providers to adopt electronic health records and connect to the health information exchange, Mass HIway.

The Shared Health Research Informatics Network (SHRINE) is another local innovation linking five hospitals (Beth Israel Deaconess Medical Center, Children’s Hospital Boston, Brigham and Women’s, Massachusetts General Hospital and the Dana Farber Cancer Center) in a centralized database to improve efficiency and quality of care.

After genomic data and patient data from electronic medical records, medical devices such as pacemakers and fitness trackers like the Fitbit are the fastest-growing sources of healthcare data. All of these rich sources of information can be – and are being – leveraged by Boston healthcare providers to improve care and lower costs.

 

3. Government


The State of Massachusetts and the City of Boston lead the nation with a sophisticated public sector approach to data and analytics. Governor Patrick made Big Data part of policy, launching the Massachusetts Big Data Initiative and supporting the Mass Open Cloud Initiative, a public cloud that utilizes an innovative open and customizable model. In 2009, the Commonwealth launched the Open Data Initiative, inviting the public to access the government’s data library from nearly every department.

But analytics’ impact on the public sector is only beginning. Big Data can significantly improve the quality and efficiency of city services, and do so at a lower cost. Most importantly, data will unlock the future of urban living. What if we knew the location of every bus, train, car, and bike in real time? What if we knew the profile of every city building? This is the vision of Boston’s future as a “connected city” outlined in the Mass Technology Leadership Council’s 2014 report Big Data & Connected Cities.

“Boston is making great strides in using technology to improve how city services are delivered but we can and will do more,” said Boston Mayor Marty Walsh about MassTLC’s report.  “We are making vast amounts of the city’s big data available online to the public to not only increase transparency but to also spur innovation.”

Walsh has shown support for a data-driven, connected city and plans to hire a City of Boston Chief Digital Officer to help make this vision a reality.

4. Energy


Big Data is a big reason Boston has evolved as a leader in the energy industry. Tapping into Big Data yields much more comprehensive, accurate reports of energy usage and also illuminates how buildings can operate more efficiently. As a result, the industry has boomed with companies helping buildings go green to save green, including local leaders EnerNoc, Retroficiency, and NextStepLiving. Buildings in Boston and beyond are being constructed or retrofitted with building automation systems – cloud-based, centralized control centers – which collect massive amounts of data, report on energy consumption in real time, and can continually adjust building performance for optimum efficiency. This “smart” living is the wave of the future and is entirely driven by Big Data.

5. Financial Services


Financial services is the fifth largest vertical for Big Data in Massachusetts. Big Data has made it possible to analyze financial data sets that previously weren’t accessible. Financial analysts now can examine and interpret unprecedented amounts of information and do so in new and innovative ways. For example, stock traders can collect and mine mass amounts of social media information to gauge public sentiment about products or companies, Information Week said.

Top companies such as Fidelity Investments, PricewaterhouseCoopers, Baystate Financial and others in Boston’s financial services sector depend heavily on Big Data to compile reports, forecast the market’s future, and guide their decisions.


Cloud Computing and the OpenStack Advantage

by: Nagendra Nyamgondalu, Senior Engineering Manager at IBM India and Brandeis Graduate Professional Studies Master of Software Engineering Alum

It was only a few years back that most IT managers I spoke to would smirk when they heard the term “cloud” in a conversation. They either didn’t believe that cloud computing would be viable for their businesses’ IT needs or were skeptical about the maturity of the technology. And rightly so. But a lot has changed since then. The technology, tools and services available for businesses considering adoption of a public cloud, setting up their own private cloud or treading the middle path of a hybrid one have made rapid strides. Now, the same IT managers are very focused on deploying workloads and applications on the cloud for cost reduction and improved efficiency.

Businesses today have the choice of consuming Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). As you can imagine, these models map directly to the building blocks of a typical data center. Servers, storage and networks form the infrastructure, on top of which the required platforms are built, such as databases, application servers or web servers, and tools for design and development. Once the two foundational layers are in place, the applications that provide the actual business value can be run on top. While all three models are indisputable parts of the bigger picture that is Cloud Computing, I have chosen to focus on IaaS here. After all, infrastructure is the first step to a successful IT deployment.

Essentially, IaaS is the ability to control and automate pools of resources, be it compute, storage, network or others, and provision them on demand. Delivering IaaS requires technology that provides efficient and quick provisioning, smart scheduling for deployment of virtual machines and workloads, support for most hardware and, of course, true scalability. OpenStack is an open source framework founded by Rackspace Hosting and NASA that takes a community approach to making all this possible. It was designed with scalability and elasticity as the overarching theme and a share-nothing, distribute-everything approach. This enables OpenStack to be horizontally scalable and asynchronous. Since inception, the community has grown to a formidable number, with many technology vendors such as IBM, Cisco, Intel, HP and others embracing it. The undoubted advantage that a community-based approach brings, especially to something like IaaS, is the extensive support for a long list of devices and cloud standards. When a new type of storage or a next-generation network switch is introduced to the market, the vendors have a lot to gain by contributing support drivers for their offerings to the community. Similar support for proprietary technology depends on customer demand and the competitive dynamics amongst the vendors; this almost always results in delayed support, if any. While proprietary versus open source is always a debate, the innovation and cost benefits that open alternatives have provided in recent years have clearly made CIOs take notice. Support for a variety of hypervisors, open APIs, support for object or block storage, and the mostly self-sufficient management capabilities are some of the common themes I hear on why businesses are increasingly adopting OpenStack. Additionally, the distributed architecture of OpenStack, where each component (such as Compute, Network, Storage and Security) runs as a separate process connected via a lightweight message broker, makes it easy for ISVs looking to build value-adds on top of the stack. All the right ingredients for a complete cloud management solution for IaaS.
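As a rough illustration of what that on-demand provisioning looks like from the consumer side, here is a minimal sketch using the openstacksdk Python library. Treat it as an assumption-laden example rather than a recipe: the cloud name, image, flavor and network names are placeholders, and credentials are assumed to be configured in a clouds.yaml file.

```python
import openstack

# Connect using credentials defined in clouds.yaml (cloud name is a placeholder).
conn = openstack.connect(cloud="example-cloud")

# Look up building blocks by name (all names are placeholders).
image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

# Ask the Compute service to provision a virtual machine. Behind this call,
# OpenStack's separate services coordinate over the message broker.
server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until the server reaches ACTIVE, then print its status.
server = conn.compute.wait_for_server(server)
print(server.status)
```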

Most IT managers dream of the day when every request for infrastructure is satisfied instantly by the click of a button regardless of the type being requested, workloads run smoothly and fail over seamlessly when there is a need to, resource usage is constantly optimal and adding additional hardware to the pool is a smooth exercise. Business managers dream of the day when they have instant access to the infrastructure needed to run their brand new application and, once it is up, it stays up. Aaah, Utopia.

The good news is it is possible here and now.


 

Nagendra Nyamgondalu is a Senior Engineering Manager at IBM in India. He is a 2003 graduate from Brandeis University, Graduate Professional Studies’ Master of Software Engineering Program.

 

Design Your Agile Project, Part 1

by: Johanna Rothman

Find the original post here: http://www.jrothman.com/blog/mpd/2014/03/design-your-agile-project-part-1-2.html

The more I see teams transition to agile, the more I am convinced that each team is unique. Each project is unique. Each organizational context is unique. Why would you take an off-the-shelf solution that does not fit your context? (I wrote Manage It! because I believe in a context-driven approach to project management in general.)

One of the nice things about Scrum is the inspect-and-adapt approach to it. Unfortunately, most people do not marry the XP engineering practices with Scrum, which means they don’t understand why their transition to agile fails. In fact, they think that Scrum alone, without the engineering practices, is agile. How many times do you hear “Scrum/Agile”? (I hear it too many times. Way too many.)

I like kanban, because you can see where the work is. “We have a lot of features in process.” Or, “Our testers never get to done.” (I hate when I hear that. Hate it! That’s an example of people not working as a cross-functional team to get to done. Makes me nuts. But that’s a symptom, not a cause.) A kanban board often provides more data than a Scrum board does.

Can there be guidelines for people transitioning to agile? Or guidelines for projects in a program? There can be principles. Let’s explore them.

The first one is to start by knowing how your product releases, starting with the end in mind. I’m a fan of continuous delivery of code into the code base. Can you deliver your product that way? Maybe.

How Does Your Product Release?

I wish there were just two kinds of products: those that release continuously, as in Software as a Service, and those with hardware, which release infrequently. Products that release infrequently do so because of the cost of each release. But there’s a continuum of release frequency:

Potential Release Frequency

How expensive is it to release your product? The expense of release will change your business decision about when to release your product.

You want to separate the business decision of releasing your product from making your software releasable.

That is, the more to the left of the continuum you are, the more you can marry your releases to your iterations or your features, if you want. Your project portfolio decisions are easier to make, and they can occur as often as you want, as long as you get to done, every feature or iteration.

The more to the right of the continuum you are, the more you need to separate the business decision of releasing from finishing features or iterations. The more to the right of the continuum, the more important it is to be able to get to done on a regular basis, so you can make good project portfolio decisions. Why? Because you often have money tied up in long-lead item expenses. You have to make decisions early for committing to hardware or Non Recurring Engineering expenses.

How Complex is Your Product?

Let’s look at the Cynefin model to see if it has suggestions for how we should think about our projects:

I’ll talk more about how you might want to use the Cynefin model to analyze your project or program in a later post. Sorry, it’s a system, and I can’t do it all justice in one post.

In the meantime, take a look at the Cynefin model, and see where you think you might fall in the model.

Do you have one collocated cross-functional team who wants to transition to agile? You are in the “known knowns” situation for agile. As for your product, you are likely in the “known unknowns” situation. Are you willing to use the engineering practices and work in one- or two-week iterations? Almost anything in the agile or lean community will work for you.

As soon as you have more than one or two teams, or you have geographically distributed teams, or you are on the right hand side of the “Potential for Release Frequency” chart above, do you see how you are no longer in the “Complicated” or “Obvious” side of the Cynefin model? You have too many unknowns.

Where Are We Now?

Here are my principles:

  1. Separate the business decision for product release from the software being releasable all the time. Whatever you have for a product, you want the software to be releasable.
  2. Understand what kind of a product you have. The closer you are to the right side of the product release frequency, the more you need a program, and the more you need a kanban to see where everything is in your organization, so you can choose to do something about them.
  3. Make sure your batch size is as small as you can make it, program or project. The smaller your features, the more you will see your throughput. The shorter your iteration, the more feedback you will obtain from your product owner and “the business.” You want the feedback so you can learn, and so your management can manage the project portfolio.
  4. Use the engineering practices. I cannot emphasize this enough. If you do not keep your stories small so that you can develop automated unit tests, automated system tests, use continuous integration, swarm around stories or pair, and use the XP practices in general, you will not have the safety net that agile provides you to practice at a sustainable pace. You will start wondering why you are always breathless, putting in overtime, never able to do what you want to do. (A tiny illustration of a small story with its automated unit test follows this list.)
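To make principle 4 a bit more tangible, here is a tiny, entirely hypothetical illustration of a story kept small enough to carry its own automated unit test; the function and numbers are invented, and the tests are written for pytest so they can run on every commit in continuous integration.

```python
# A story sliced small enough to test: "orders of $100 or more ship free".
def shipping_cost(order_total: float) -> float:
    return 0.0 if order_total >= 100 else 7.95

# The matching automated unit tests, run by pytest in the CI build.
def test_orders_at_or_over_threshold_ship_free():
    assert shipping_cost(100) == 0.0
    assert shipping_cost(150) == 0.0

def test_smaller_orders_pay_flat_rate():
    assert shipping_cost(99.99) == 7.95
```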

If you have technical debt, start to pay it down a little at a time, as you implement features. You didn’t accumulate it all at once. Pay it off a little at a time. Or, decide that you need a project to prevent the cost of delay for release. If you are a technical team, you have a choice to be professional. No one is asking you to estimate without providing your own safety net. Do not do so.

This post is for the easier transitions, the people who want to transition, the people who are collocated, the people who have more knowns than unknowns. The next post is for the people who have fewer knowns. Stay tuned.

Johanna Rothman

Bigger than “Cloud Computing”

by: Ari Davidow

It’s textbook season once again. That’s the time of year when I go through new textbooks for next semester’s course.

The good news is, “Cloud Computing,” a subject so out on the edge when it was first offered four years ago that it was a “special topic,” is now relatively mainstream. The bad news is, the textbooks still focus on teaching network administrators how to set up cloud services. Which wouldn’t be a bad class, and it is certainly useful to IT professionals, but it isn’t the class that we teach here at Brandeis.

My course focuses as much on how “Cloud Computing” is changing how we do our jobs as it does on the practicalities of using common Cloud infrastructure. We don’t neglect becoming familiar with common Cloud “Infrastructure as a Service” components such as storage, queue services, databases, web servers and the like. But that is a limited corner of the field.
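For a small taste of what hands-on work with those building blocks looks like, here is a minimal sketch using AWS’s boto3 library as one example provider. The bucket and queue names are placeholders, configured credentials are assumed, and the course itself is not tied to any single vendor.

```python
import boto3

# Object storage: put a small file into a bucket (the bucket name is a
# placeholder and must already exist in your account).
s3 = boto3.client("s3")
s3.put_object(Bucket="gps-demo-bucket", Key="hello.txt", Body=b"hello, cloud")

# Queue service: create a queue, then send and receive a message.
sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="gps-demo-queue")["QueueUrl"]
sqs.send_message(QueueUrl=queue_url, MessageBody="process this later")
reply = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
print(reply.get("Messages", []))
```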

I first realized how far ahead of the times our course was when I saw one of the computing consulting groups, IDC, refer to the topics we address as “The Third Platform.” Turns out, by focusing on the different types of Cloud Computing platforms and spending time considering related issues (“Big Data” and how “mobile computing” affects it all), we were focusing attention on what IDC feels is a major shift in computing. A shift so large it is comparable to the switch from mainframes to personal computers not so many years ago.

Additionally, the IDC report accidentally highlights how we create courses. Sometimes, when we’re teaching a language or computing system, we focus on the basics of just learning that language or platform. If you take a Ruby class, or a class in Analytics, you’ll get a good grounding in those disciplines. But with Cloud Computing we are talking about changes in technology that are changing everything around them.

Software as a Service (SaaS) has radically changed how Enterprise applications are purchased and maintained. Infrastructure as a Service (IaaS) has changed the way start-ups work and thoroughly changed the economics of putting new ideas to the test. The proliferation of mobile devices has similarly destroyed the notion that network security is as simple as thinking in terms of one person/one device, most of which are physically hooked up to the network. This is a paradigm already challenged by the need to integrate SaaS services with the rest of the network.

When you sign up for “Cloud Computing” this summer, you are signing up to explore the entire “Third Platform.” We’ll also walk you through some bare metal Cloud Computing basics and have some big fun with Big Data. I look forward to seeing you soon.

P.S. As with all Brandeis GPS classes, you can participate with whatever computing device is convenient to you—your computer, your tablet or smartphone. We like to practice what we teach.

