Brandeis GPS Blog

Insights on online learning, tips for finding balance, and news and updates from Brandeis GPS

Month: September 2014

The Changing Face & Motivations of the Online Learner

By: – Associate Editor, BostInno

More than 7.1 million students are taking at least one online course. Yet, although the need for lifelong learning is clear, the textbook definition of a “student” is blurring as quickly as the field itself is evolving.

As Penn Foster CEO Frank Britt noted, “In the age of virtual classrooms, unlimited by space or time, education is becoming more and more accessible — and affordable.” These benefits continue to attract the masses to online learning, whether to further their careers or further themselves.

Research surfaced in mid-2012 painting the picture of today’s average online learner: a white, 33-year-old female with a full-time job that pays roughly $65,000 per year. Her aspirations gravitate toward those of the world’s most prosperous business magnates, and she is looking for a way to quickly advance in her current career.

Now, the description differs by platform and case. Local online learning nonprofit edX just released working papers on 17 of its courses, finding that the most typical registrant overall was a male with a bachelor’s degree, age 26 or older. Because edX offers massive open online courses rather than degrees, however, that profile describes only 31 percent of enrolled students.

Whether a massive open online platform or university-sponsored, accredited one, online students tend to share several similarities, particularly their motivation for taking to the Internet to learn, such as:

The Drive to Further Their Current Career

For individuals slaving away at full-time jobs, online learning provides an opportunity to get ahead on their own schedule. When Brandeis created its Graduate Professional Studies program, which offers online master’s degrees, the University ensured it was “flexible” and made with working professionals in mind. There’s no easier way to advance in your chosen field than by educating yourself in that field.

The Need to Find Work-Life Balance

Flexibility is also a perk for individuals heading their household. For parents who dream of going back to school, yet have a family full of alternating schedules they need to coordinate around, online education provides the work-life balance they crave. Students can learn at their own pace, and when it’s most convenient for them.

The Need for Access

Around the world, lifelong learners crave an education their underserved community might not be able to adequately provide. For thousands, online learning opens up doors previously unimagined and fills the education divide — paving the way for a future once deemed impossible.

The Desire to Test Drive

An added benefit of online courses is that they allow people to test out fields before fully jumping into them. Take Brandeis GPS, which allows students to take up to two courses, no strings attached, before applying to a degree program. Anyone considering changing fields, but hesitant to make major moves, can use online education, which is oftentimes free, to determine whether or not they’re ready to take the plunge.

As Britt argued, the way we define “students” is changing, and that’s because of the Internet. Education no longer needs to be a luxury of the few, but rather a right of the masses. There’s no typical learner anymore.

To help determine the option that’s right for you, click here

Original post here


Fuzzy Math: The Security Risk Model That’s Actually About Risk

By: Derek Brink

Reblogged from: https://blogs.rsa.com/fuzzy-math-security-risk-model-thats-actually-risk/

Sharpen your number two pencils, everyone, and use the following estimates to build a simple risk model:

  • Average number of incidents: 12.5 incidents per month (each incident affects 1 user)
  • Average loss of productivity: 3.0 hours per incident
  • Average fully loaded cost per user: $72 per hour

Based on this information, what can your risk model tell me about the security risk?

My guess is that your initial answer is something along the lines of “the average business impact is $2,700 per month,” which you obtained by the following calculation:

12.5 incidents/month * 3.0 hours/incident * $72/hour = $2,700/month

But in fact, this tells us almost nothing about the risk—remember that risk is defined as the likelihood of the incident, as well as the magnitude of the resulting business impact. If we aren’t talking about probabilities and magnitudes, we aren’t talking about risks! (We can’t even say that 50% of the time the business impact will be greater than $2,700, and 50% of the time it will be less—that would be the median, not the mean or average. Even if we could, how useful would that really be to the decision maker?)

Let’s stay with this simplistic example, and say that your subject matter experts actually provided you with the following estimates:

  • Number of incidents: between 11 and 14 per month
  • Loss of productivity: between 1 and 5 hours per incident
  • Fully loaded cost per user: between $24 and $120 per hour

This is much more realistic. As we have discussed in “What Are Security Professionals Afraid Of?,” the values we have to work with are generally not certain. If we knew with certainty what was going to happen and how big an impact it would have, it wouldn’t be a risk!

Based on these estimates, what would your risk model look like now?

For many of us, our first instinct would be to use the average for each of the three ranges to compute an “expected value,” which is of course exactly the result that we got before.

Some of us might try to be more ambitious, and compute an “expected case,” a “low case,” and a “high case”—by using the average and the two extremes of the three ranges:

  • Expected case = 12.5 * 3.0 * $72 = $2,700/month
  • Low case = 11 * 1.0 * $24 = $264/month
  • High case = 14 * 5.0 * $120 = $8,400/month

It would be tempting to say that the business impact could be “as low as $264/month or as high as $8,400/month, with an expected value of $2,700/month.” But again, this does not tell us about risk. What is the probability of the low case, or the high case? What is the likelihood that the business impact will be more than $3,000 per month, which happens to be our decision-maker’s appetite for risk?

Further, we would be ignoring the fact that the three ranges in our simple risk model actually move independently—i.e., it isn’t logical to assume that fewer incidents will always be of shorter duration and lower hourly cost, or the converse.

Unfortunately, this is the point at which so many security professionals throw up their hands at the difficulty of measuring security risks and either fall back into the trap of techie-talk or gravitate towards qualitative 5×5 “risk maps.”

The solution to this problem is to apply a proven, widely used approach to risk modeling called Monte Carlo simulation. In a nutshell, we can carry out the computations for many (say, a thousand, or ten thousand) scenarios, each of which uses a random value from our estimated ranges. The results of these computations are likewise not a single, static number; the output is also a range and distribution, from which we can readily describe both probabilities and magnitudes—exactly what we are looking for!

Staying with our same simplistic example, we can use those estimates provided by our subject matter experts plus the selection of a logical distribution for each range. Here are my choices:

  • Number of incidents: Between 11 and 14 incidents per month—I will use a uniform distribution, meaning that any value between 11 and 14 is equally likely.
  • Loss of productivity: Between 1 and 5 hours per incident—I will use a normal distribution (the familiar bell-shaped curve), meaning that the values are most likely to be around the midpoint of the range.
  • Fully loaded cost per user: Between $24 and $120 per hour—I will use a triangular distribution, to reflect the fact that the majority of users are at the lower end of the pay scale, while still accommodating the fact that incidents will sometimes happen to the most highly paid individuals.

The following graphic provides a visual representation of the three approaches.

Based on a Monte Carlo simulation with one thousand iterations—performed by using standard functions available in an Excel spreadsheet—we can advise our business decision makers with the following risk-based statements:

  • There is a 90% chance that the business impact will be between $500 and $4,500 per month.
  • There is an 80% likelihood that the business impact will be greater than $1,000 per month.
  • The mean (average) business impact is about $2,100 per month—note how this is significantly lower than the $2,700 figure computed earlier; the difference is in the use of the asymmetrical triangular distribution for one of the variables.
  • There is a 20% likelihood that the business impact will be greater than $3,000 per month.
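
For readers who want to reproduce this outside of Excel, here is a minimal sketch in Python. The distribution families mirror the choices above, but the normal distribution’s standard deviation and the triangular mode are my assumptions, so the printed figures should land in the neighborhood of the statements above rather than match them exactly.

    import random

    def simulate_monthly_impact(iterations=1000, seed=1):
        """One trial per iteration of: incidents * hours * cost per hour."""
        rng = random.Random(seed)
        impacts = []
        for _ in range(iterations):
            incidents = rng.uniform(11, 14)        # uniform: 11-14 per month
            hours = max(rng.gauss(3.0, 1.0), 0.0)  # normal centered on 3 hours (sigma assumed)
            cost = rng.triangular(24, 120, 24)     # triangular: $24-$120, mode assumed at $24
            impacts.append(incidents * hours * cost)
        return sorted(impacts)

    impacts = simulate_monthly_impact()
    n = len(impacts)
    mean = sum(impacts) / n
    p5, p95 = impacts[int(0.05 * n)], impacts[int(0.95 * n)]
    exceed = sum(1 for x in impacts if x > 3000) / n
    print(f"mean impact: ${mean:,.0f}/month")
    print(f"90% of trials fall between ${p5:,.0f} and ${p95:,.0f}/month")
    print(f"P(impact > $3,000/month) = {exceed:.0%}")

Note how the mean emerges around $2,100 rather than $2,700: the asymmetrical triangular distribution pulls the average cost down to $56 per hour, and 12.5 * 3.0 * $56 = $2,100.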

If warranted, we can try to reduce the uncertainty of this analysis even further by improving the estimates in our risk model. (There will be more to come, in upcoming blogs, on that.)

What to do, of course, depends entirely on each organization’s appetite for risk. But as security professionals, we will have done our jobs, in a way that’s actually useful to the business decision maker.

About the Author:

Derek E. Brink, CISSP, is a Vice President and Research Fellow covering topics in IT Security and IT GRC for Aberdeen Group, a Harte-Hanks Company. He is also adjunct faculty with Brandeis University Graduate Professional Studies, teaching courses in our Information Security Program. For more blog posts by Derek, please see http://blogs.aberdeen.com/category/it-security/ and http://aberdeen.com/_aberdeen/it-security/ITSA/practice.aspx



Footerindesign

Is Healthcare the Next Frontier for Big Data?

By: – Custom Content Coordinator, BostInno

The health care industry has always been at the center of emerging technology as a leader in the research and application of advanced sciences. Now, more than ever, the industry is on the edge of an innovation boom. Health care information technology possesses vast potential for advancement, making the field fertile ground for game-changing innovation and the next great frontier for big data.

The use of electronic health records (EHR), electronic prescribing, and digital imaging by health care providers has exploded in recent years, Health Affairs reports, and the global health information exchange (HIE) market is projected to grow nearly ten percent per year, reaching $878 million in 2018, according to Healthcare Informatics.

But despite massive growth, health care IT faces a number of barriers slowing advancement.

When it comes to health information technologies, demand is outpacing delivery. Users desire higher levels of performance beyond the capacity of current IT solutions.

“Providers certainly want to do things that vendor technology doesn’t allow right now,” Micky Tripathi, Ph.D., CEO of the Massachusetts eHealth Collaborative (MAeHC), told Healthcare Informatics.

One reason technology is lagging is that health care IT systems are independently developed and operated. Rather than one massive network, there are numerous “small shops developing unique products at high cost with no one achieving significant economies of scale or scope,” Health Affairs reported. As a result, innovations are isolated, progress is siloed, and technology cannot meaningfully advance.

To deliver the highest quality of care, the health care community must unite disparate systems in a centralized database. But this is easier said than done. The industry must be sure to maintain the highest standards of security, complying with the Health Insurance Portability and Accountability Act of 1996 (HIPAA).

As a result, the health care IT industry currently faces a crucial challenge: devise an overarching system that guarantees security, sustainability, and scale.

The key to unlocking big data solutions is the informaticians who translate mountains of statistics into meaningful health care IT applications.

“The growing role of information technology within health-care delivery has created the need to deepen the pool of informaticians who can help organizations maximize the effectiveness of their investment in information technology—and in so doing maximize impact on safety, quality, effectiveness, and efficiency of care,” the American Medical Informatics Association noted. The future of health care hinges on the ability to connect the big data dots and apply the insights to creating and practicing a smart IT strategy.

Organizations have thrown themselves into the big data trenches to innovate solutions to the problem facing their industry. Ninety-five percent of healthcare CEOs said they were exploring better ways to harness and manage big data, a PricewaterhouseCoopers study reported. With the commitment of the health care community, plus the right talent and resources, industry-advancing innovations won’t be far behind.

Health care is indisputably the next great frontier for big data. How we seek, receive, and pay for health care is poised to fundamentally change, and health care informaticians will be leading the evolution.



 

Managers Manage Ambiguity

by: Johanna Rothman

Reblogged from: http://www.jrothman.com/blog/mpd/2014/08/managers-manage-ambiguity.html

I was thinking about Glen Alleman’s post, All Things Project Are Probabilistic. In it, he says “Management is Prediction,” as an inference from Deming. When I read this quote,

If you can’t describe what you are doing as a process, you don’t know what you’re doing. –Deming

I infer from Deming that managers must manage ambiguity.

Here’s where Glen and I agree. Well, I think we agree. I hope I am not putting words into Glen’s mouth. I am sure he will correct me if I am.

Managers make decisions based on uncertain data. Some of that data is predictive data.

For example, I suggest that people provide, where necessary, order-of-magnitude estimates of projects and programs. Sometimes you need those estimates. Sometimes you don’t. (Yes, I have worked on programs where we didn’t need to estimate. We needed to execute and show progress.)

Now, here’s where I suspect Glen and I disagree:

  1. Asking people for detailed estimates at the beginning of a project and expecting those estimates to be true for the entire project. First, the estimates are guesses. Second, software is about learning. If you work in an agile way, you want to incorporate learning and change into the project or program. I have some posts about estimation in this blog queue where I discuss this.
  2. Using estimation for the project portfolio. I see no point in using estimates instead of value for the project portfolio, especially if you use agile approaches to your projects. If we finish features, we can end the project at any time. We can release it. This makes software different than any other type of project. Why not exploit that difference? Value makes much more sense. You can incorporate cost of delay into value (see the sketch after this list).
  3. If you use your estimate as a target, you have some predictable outcomes unless you get lucky: you will shortchange the feature by decreasing scope, incur technical debt, or increase the defects. Or all three.
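
One concrete way to fold cost of delay into a value ranking, as mentioned in item 2 above, is CD3 (Cost of Delay Divided by Duration), popularized by Don Reinertsen and Joshua Arnold. This is an illustration of that general technique, not necessarily the calculation Rothman has in mind, and every feature name and number in it is hypothetical:

    # Rank portfolio items by CD3: value lost per week of delay,
    # divided by the estimated weeks of work. Higher CD3 goes first.
    # All names and numbers are made up for illustration.
    features = [
        ("checkout redesign", 40_000, 8),  # (name, cost of delay in $/week, weeks of work)
        ("reporting API",     15_000, 2),
        ("SSO integration",   25_000, 4),
    ]

    for name, cod, weeks in sorted(features, key=lambda f: f[1] / f[2], reverse=True):
        print(f"{name}: CD3 = {cod / weeks:,.0f}")

The two-week reporting API ranks first despite having the lowest cost of delay, which is exactly the kind of scheduling insight a raw effort estimate would not surface.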

What works for projects is honest status reporting, which traffic lights don’t provide. Demos provide that. Transparency about obstacles provides that. The ability to be honest about how to solve problems and work through issues provides that.

Much has changed since I last worked on a DOD project. I’m delighted to see that Glen writes that many government projects are taking more agile approaches. However, if we always work on innovative, new work, we cannot predict with perfect estimation what it will take at the beginning, or even through the project. We can better our estimates as we proceed.

We can have a process for our work. Regardless of our approach, as long as we don’t do code-and-fix, we do. (In Manage It! Your Guide to Modern, Pragmatic Project Management, I say to choose an approach based on your context, and to choose any lifecycle except for code-and-fix.)

We can refine our estimates, if management needs them. The question is this: why does management need them? For predicting future cost for a customer? Okay, that’s reasonable. Maybe on large programs, you do an estimate every quarter for the next quarter, based on what you completed, as in released, and what’s on the roadmap. You already know what you have done. You know what your challenges were. You can do better estimates. I would even do an EQF (Estimation Quality Factor) for the entire project/program. Nobody has an open spigot of money.
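
EQF is Tom DeMarco’s Estimation Quality Factor: roughly, the actual result divided by the mean absolute deviation of your running estimates from it, so a higher EQF means your estimates tracked reality more closely. A minimal sketch under that reading, with hypothetical re-estimates:

    def eqf(estimates, actual):
        """Estimation Quality Factor: actual divided by the mean absolute
        deviation of the running estimates. EQF of 10 ~ off by 10% on average."""
        mad = sum(abs(e - actual) for e in estimates) / len(estimates)
        return actual / mad if mad else float("inf")

    # Hypothetical quarterly re-estimates (in months) of a program that
    # actually shipped in month 14; estimates improve as the team learns.
    print(round(eqf([10, 12, 13, 14], 14), 1))  # -> 8.0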

But, in my experience, the agile project or program will end before you expect it to. (See the comments on Capacity Planning and the Project Portfolio.) But, the project will only end early if you evaluate features based on value and if you collaborate with your customer. The customer will say, “I have enough now. I don’t need more.” It might occur before the last expected quarter. It might occur before the last expected half-year.

That’s the real ambiguity that managers need to manage. Our estimates will not be correct. Technical leaders, project managers and product owners need to manage risks and value so the project stays on track. Managers need to ask the question: What if the project or program ends early?

Ambiguity, anyone?


