Brandeis GPS Blog

Insights on online learning, tips for finding balance, and news and updates from Brandeis GPS

Tag: informatics

SPOTLIGHT ON JOBS: BERG, LLC


SPOTLIGHT ON JOBS

Members of the Brandeis GPS Community may submit job postings from within their industries to advertise exclusively to our community. This is a great way to further connect and seek out opportunities as they come up. If you are interested in posting an opportunity, please complete the following form found here.

Where: BERG Health, LLC Framingham, MA

About: Berg focuses its research on understanding how alterations in metabolism relate to disease onset. The company has a deep pipeline of early-stage technologies in CNS and metabolic diseases that complements its late-stage clinical trial activity in cancer and the prevention of chemotoxicity. Armed with the Interrogative Biology™ discovery platform, which translates biological output into viable therapeutics, and a robust biomarker library, Berg is poised to realize its pursuit of a healthier tomorrow.

Position: Data Scientist–Healthcare Analytics

The Healthcare Analytics team is seeking a highly motivated, meticulous, and detail-oriented individual for a rapidly growing multi-disciplinary team. The candidate will be instrumental in analyzing and making inferences from healthcare big data, must be goal-oriented, and should have a strong background in statistics and epidemiology along with some programming skills. The candidate should also be a quick learner, extremely flexible, and able to adapt to the needs of the project.

Responsibilities:

  • Perform meticulous and well thought-out data analysis for hypothesis testing on healthcare big data.
  • Develop and execute data analysis protocols to support the company’s discovery pipeline.
  • Document data analysis methods and findings in detail.
  • Present scientific results internally and externally.

Requirements:

  • Requires a Ph.D., or a Master’s with 5+ years of relevant experience, in Statistics, Epidemiology, Public Health, Data Science, or a related field.
  • Strong skills in statistics and study design.
  • Experience working with healthcare claims, pharmacy, and EMR data is highly desirable.
  • Proficiency in R, MySQL, and Perl is preferred.
  • Proven ability to find creative, practical solutions to complex problems.
  • Excellent communication and interpersonal skills, combined with a proven track record of technical and organizational skills.
  • Must be able to work in a team-oriented environment and demonstrate attention to detail and record keeping.

 

Anyone interested in applying to this position may send their resume, cover letter and three references to hr-68931@berghealth.com.

Make sure to reference seeing this position through the Brandeis GPS job spotlight post.

 

Click here to subscribe to our blog!


SPOTLIGHT ON JOBS: Bioinformatician/Microbiologist at IHRC, Inc.


Title:  Bioinformatician/Microbiologist, Full Time–assigned to the Enteric Diseases Laboratory Branch (EDLB) within the Division of Foodborne, Waterborne, and Environmental Diseases (DFWED) at the US Centers for Disease Control and Prevention (CDC)

Where:  IHRC, Inc.– 2 Ravinia Drive-Suite 1750, Atlanta, GA 30346

About: IHRC, Inc. provides scientific, information management and administrative program support to various Centers, Institutes and Offices of the Centers for Disease Control and Prevention (CDC) under several contracts.

MAJOR DUTIES AND RESPONSIBILITIES:

  • Work very closely with EDLB bioinformaticians to process raw sequence data produced by Sanger and NGS sequencing; identify, characterize, and annotate genes found in sequence data; perform evolutionary and phylogenetic studies of sequence data; identify unique regions of pathogen genomes for use as targets in molecular diagnostic assays; and design oligonucleotide primers/probes for use in sequencing projects and foodborne bacterial pathogen detection assays.
  • Analyze genetic sequencing data utilizing new methodologies or existing techniques that have been properly revised and validated.
  • Develop, evaluate, and validate bioinformatics tools to establish correlations between unique genetic markers (extracted from whole genome sequences) and the species/serotype of foodborne bacterial pathogens.
  • Perform standard molecular biology techniques, including but not limited to manual and automated nucleic acid extraction, Polymerase Chain Reaction (PCR), Sanger sequencing, and Next Generation Sequencing (NGS), to develop and validate schemes to rapidly identify and subtype bacterial pathogens from complex matrices.
  • Evaluate novel approaches to address human and commensal bacterial DNA in disease-state stool, including clutter mitigation.
  • Assist with the establishment of procedures for Quality Assurance (QA) and Quality Control (QC) of sequence data analysis in the context of this project.
  • Assist in implementation of the developed assays by training laboratorians from state, county, and local public health laboratories.
  • Prepare reports, charts, graphs, and presentations as required.

Minimum Qualifications and Technical Requirements:

  • Must have successfully completed a full 4-year course of study in an accredited college or university leading to a bachelor’s degree and a higher degree (Master’s or Ph.D.), with at least 24 semester hours in mathematics, statistics, bioinformatics, or informatics and 24 semester hours in molecular biology, microbiology, or genetics.
  • The successful candidate will have 2-3 years of demonstrated experience with bioinformatics, bioinformatics analytical tools, microbiology, and molecular biology laboratory techniques.
  • Be proficient in the use of various whole genome sequencing laboratory procedures.
  • Be proficient in the analysis, assembly, and annotation of genomic data from a variety of sequencing platforms such as Illumina and PacBio.
  • Have strong interpersonal skills, since the work occurs in a multidisciplinary team environment and requires extensive interaction with people across CDC and external partners.
  • Have strong written and oral presentation skills.
  • Be capable of developing and executing research tasks independently.
  • REQUIRED: experience with Illumina sequencing technology, molecular biology assay design, UNIX (Bash scripting), and cluster computing (SGE); experience coding in Perl and/or Python; experience with relational databases (e.g., MySQL, Microsoft SQL Server), Microsoft Excel, and Microsoft Word.
  • DESIRABLE: experience with Microsoft Access and SAS; experience with metagenomic analysis tools such as Kraken and MetaPhlAn; prior work with bacterial genome sequencing and analysis.
  • The candidate must possess excellent oral and written communication skills.

 

Contact Person for Job Opportunity:

Michael Astwood

678-615-3220

mastwood@ihrc.com

 

To apply for this position, visit www.ihrc.com/careers. Once on the IHRC website, click on the job you are interested in, then click the apply button at the bottom of the page.

IHRC, Inc. is an equal opportunity and affirmative action employer. It is the policy of IHRC, Inc. to provide equal employment opportunities without regard to race, color, religion, sex, national origin, age, veteran or disability status and to take affirmative action in accordance with applicable laws and Executive Orders.



Brandeis GPS Commencement Wrap-Up

Written by: Kelsey Whitaker, a senior at Brandeis University


Amyntrah Maxwell & Rabb VP Karen Muncaster

On May 17th, to the sounds of “Pomp and Circumstance”, the Rabb School of Graduate Professional Studies‘ class of 2015 donned their caps and gowns and received their diplomas. The ceremony awarded Master’s degrees in fields including Bioinformatics, Health and Medical Informatics, Information Security, IT Management, Project & Program Management, and Software Engineering. Students sat proudly and enjoyed the student and main commencement speakers’ words of wisdom for their future. As working professionals in their respective fields, each degree recipient juggled work, school, and personal matters in order to earn their master’s degree.


Student speaker, Louis Rosa III

The student speaker for the day was Louis Rosa III, who earned his Doctor of Medicine from Georgetown University’s School of Medicine and has over 30 years of experience in the fields of neurosurgery and radiation therapy. However, on the day of commencement, Rosa walked out with a newly earned diploma in Health and Medical Informatics. After all of that experience, why did Rosa pursue a degree from Brandeis GPS? “No matter how many patients I saw, I couldn’t have enough of an impact,” Rosa explained, going on to describe the impact his new degree would have on his career and his life.

The main Commencement speaker, Curtis H. Tearte, is a 1973 Brandeis graduate and a current Board of Trustees member. Tearte has vast experience in technology and business as a former director, vice president, and general manager at IBM. “My experience at Brandeis exponentially changed the arc of my life,” he explained to the graduates. Tearte is also the founder of Tearte Associates, a firm dedicated to seeking out students with academic potential to become Tearte Scholars through his Family Foundation. His advice to the graduates was, “Keep putting out good and it will come back to us tenfold in unexpected ways.”

Commencement speaker, Curtis H. Tearte

In addition to the speakers, the Outstanding Teacher Award was presented to Leanne Bateman. Teaching at Brandeis since 2007, Bateman serves as Academic Program Chair and a faculty member for Project and Program Management and Strategic Analytics. Congrats, Leanne!

Congratulations to the 2015 graduates! You did it! Good luck in all your future plans and endeavors.

Want to see the live stream of commencement? You’re in luck! Watch it here.



 

The Luxury of Less

by: Katherine S. Rowell, author of “The Best Boring Book Ever of Select Healthcare Classification Systems and Databases,” available now!

Originally posted here

I often find myself torn between wanting to get as much useful information as possible onto a single page of the reports and dashboards we design and build, and my love of white space, or “The Luxury of Less.” In a page layout, white space (also called “negative space”) is the portion of a page deliberately left unmarked. When well chosen and placed, it is a key contributor to attractive, effective design. Done poorly, it can make a page appear incomplete or even pretentiously minimal.

Consider the following example of the potential power of white space, illustrated by Edward Tufte’s redesign of a table of cancer survival statistics.

Original Table:

Source: Hermann Brenner, "Long-term survival rates of cancer patient[s] achieved by the end of the 20th century; a period analysis," The Lancet, 360 (October 12, 2002), 1131-1135.

Tufte First Iteration Table Redesign:

Source: Edward Tufte

Second Table-Graph Iteration:

Source: Edward Tufte

The original table, which is similar to the ones we are accustomed to seeing in scientific publications, is ordered by body system and is perfectly adequate for the look-up and comparison of values, including details about the Standard Error (SE) of each value; that is, it serves its purpose. But could it be improved?

Tufte’s first redesign highlights a particular (and newly featured) aspect of the data: five-year survival rates by type of cancer. Notice how each row in this redone table has a bit more white space; the heavy black lines framing the titles and column headings, and the parentheses around the standard errors, have been removed, giving some visual respite and making the figures more legible. The re-categorization of the data also makes a trend it illustrates somewhat easier to spot (take a minute to look at the information in the first column and follow it across; you’ll see it). The entire table looks and feels cleaner, and especially for research publications that require reporting and display of all relevant statistics, this table redesign works very well.

The third table-graphic, a hybrid of the two forms, provides yet another view, and a different data-visualization lesson. It presents the viewer with a clear picture of survival-time gradients, illustrating the slope of survival rates for each type of cancer. In this last table-graphic, there is an even greater use of white space, and every visual element contributes directly to understanding: simply, elegantly, clearly. The use of space coupled with a line to show the slope of change leaves no doubt about the story in the data.

Although I see no compelling reason why a view like this couldn’t be used in a research publication (adding back in the standard errors), this is of course wishful thinking on my part: it simply won’t happen any time soon. What isn’t wishful thinking, however, is that we have immediate opportunities to use these techniques to build the tables and other displays we create for our clients, supervisors, and colleagues. We can most certainly use them to simplify and clarify information for patients and the general public, too.

Bottom line? Tufte reminds us yet again of the power of simplicity, and that showing less often reveals so much more.



How Big Data Has Changed 5 Boston Industries

By: 

Emerging technologies have unlocked access to massive amounts of data, data that is mounting faster than organizations can process it. Buried under this avalanche of analytics are precious nuggets of information that organizations need to succeed. Companies can use these key insights to optimize efficiency, improve customer service, discover new revenue sources, and more. Those who can bridge the gap between data and business strategy will lead in our new economy.

Big Data’s potential impact on enterprises and industries as a whole is boundless, and that potential is already being realized here in the Hub. Boston has been ahead of the curve when it comes to Big Data thanks to our unique innovation ecosystem, or our “Big Data DNA,” as the Massachusetts Technology Leadership Council puts it. As a result, Boston is home not only to an especially high concentration of Big Data startups, but also to powerhouse industries that have strategically leveraged analytics and transformed the space.

Check out how data and analytics have changed these five Boston industries.

1. Marketing & Advertising


In our age of online marketing, marketers have access to mountains of data. Pageviews, clicks, conversion, social shares…the list is endless. That doesn’t even account for the demographic data marketers collect and interpret every day.

These analytics have enabled marketers to access a more comprehensive report of campaign performances and in-depth view of buyer personas. Armed with these insights, marketers are able to refine their campaigns, improve forecasts, and advance their overall strategy.

Big Data also enables targeted marketing, a crucial component of today’s online strategy. You know those eerily accurate advertisements on your Facebook page? You can thank Big Data for that.

Analytics have unlocked enormous potential for marketers to better create, execute, and forecast campaigns. As a result, Boston has boomed with organizations entirely devoted to providing data-driven marketing solutions. HubSpot and Jumptap have emerged as leaders in this space, raising about $2.5 billion combined. Attivio, Visible Measures, and DataXu are also leading marketing solutions providers.

2. Healthcare


It shouldn’t come as a surprise that healthcare represents a top industry in Boston’s Big Data ecosystem. The healthcare industry collects and analyzes enormous volumes of clinical data on a daily basis. Partners Healthcare alone has some two billion data elements from over six thousand patients, according to the Massachusetts 2014 Big Data Report.

Big Data’s impact can be seen first and foremost with the electronic health record. Big Data has launched the electronic health record into the twenty-first century, revolutionizing patient care, and empowering the success of companies like athenahealth based in Watertown.

“The meaningful use of electronic health records is key to ensuring that healthcare focuses on the needs of the patient, is delivered in a coordinated manner, and yields positive health outcomes at the lowest possible cost,” the report said.

The space has expanded even more since Massachusetts passed legislation requiring all providers to adopt electronic health records and connect to the health information exchange, Mass HIway, in 2012.

The Shared Health Research Informatics Network (SHRINE) is another local innovation, linking five hospitals (Beth Israel Deaconess Medical Center, Children’s Hospital Boston, Brigham and Women’s, Massachusetts General Hospital, and the Dana-Farber Cancer Institute) in a centralized database to improve the efficiency and quality of care.

After genomic data and patient data from electronic medical records, medical devices such as pacemakers and Fitbits are the fastest-growing sources of healthcare data. All of these rich sources of information can be, and are being, leveraged by Boston healthcare providers to improve care and lower costs.

 

3. Government


The State of Massachusetts and the City of Boston lead the nation with a sophisticated public-sector approach to data and analytics. Governor Patrick made Big Data part of policy, launching the Massachusetts Big Data Initiative and supporting the Mass Open Cloud Initiative, a public cloud that utilizes an innovative open and customizable model. In 2009, the Commonwealth launched the “Open Data Initiative,” inviting the public to access the government’s data library from nearly every department.

But analytics’ impact on the public sector is only beginning. Big Data can significantly improve the quality and efficiency of city services, and do so at a lower cost. But most importantly, data will unlock the future of urban living. Imagine if we knew the location of every bus, train, car, and bike in real-time? Imagine if we knew the profiles of every city building? This is the vision of Boston’s future as a “connected city” outlined in Mass Technology Leadership Council’s 2014 report Big Data & Connected Cities.

“Boston is making great strides in using technology to improve how city services are delivered but we can and will do more,” said Boston Mayor Marty Walsh about MassTLC’s report.  “We are making vast amounts of the city’s big data available online to the public to not only increase transparency but to also spur innovation.”

Walsh has shown support for a data-driven, connected city and plans to hire a City of Boston Chief Digital Officer to help make this vision a reality.

4. Energy


Big Data is a big reason Boston has evolved into a leader in the energy industry. Tapping into Big Data yields much more comprehensive, accurate reports of energy usage and also illuminates how these buildings can operate more efficiently. As a result, the industry has boomed with companies helping buildings go green to save green, including local leaders EnerNOC, Retroficiency, and Next Step Living. Buildings in Boston and beyond are being constructed or retrofitted with building automation systems – cloud-based, centralized control centers – which collect massive amounts of data, report on energy consumption in real time, and can continually adjust building performance for optimum efficiency. This “smart” living is the wave of the future and is entirely driven by Big Data.

5. Financial Services


Financial services is the fifth largest vertical for Big Data in Massachusetts. Big Data has made it possible to analyze financial data sets that previously weren’t accessible. Financial analysts now can examine and interpret unprecedented amounts of information and do so in new and innovative ways. For example, stock traders can collect and mine mass amounts of social media information to gauge public sentiment about products or companies, Information Week said.

Top companies Fidelity Investments, PricewaterhouseCoopers, Baystate Financial, LLC, and others in Boston’s financial services sector depend heavily on big data to compile reports, forecast market trends, and guide their decisions.


Is an Average of Averages Accurate? (Hint: NO!)

by: Katherine S. Rowell, author of “The Best Boring Book Ever of Select Healthcare Classification Systems and Databases,” available now!

Originally posted: http://ksrowell.com/blog-visualizing-data/2014/05/09/is-an-average-of-averages-accurate-hint-no/

Today a client asked me to add an “average of averages” figure to some of his performance reports. I freely admit that a nervous and audible groan escaped my lips as I felt myself at risk of tumbling helplessly into the fifth dimension of “Simpson’s Paradox” – the counterintuitive phenomenon in which a trend that appears in separate groups of data reverses or disappears when the groups are combined, so that averaging the averages of different populations does not produce the average of the combined population. (I encourage you to hang in and keep reading, because ignoring this concept is an all too common and serious hazard of reporting data, and you absolutely need to understand and steer clear of it!)

Imagine that we’re analyzing data for several different physicians in a group. We establish a relation or correlation for each doctor to some outcome of interest (patient mortality, morbidity, client satisfaction). Simpson’s Paradox states that when we combine all of the doctors and their results and look at the data in aggregate form, we may discover that the relation established by our previous research has reversed itself. Sometimes this results from some lurking variable(s) that we haven’t considered. Sometimes, it may be due simply to the numerical values of the data.

First, the “lurking variable” scenario. Imagine we are analyzing the following data for two surgeons:

  1. Surgeon A operated on 100 patients; 95 survived (95% survival rate).
  2. Surgeon B operated on 80 patients; 72 survived (90% survival rate).

At first glance, it would appear that Surgeon A has a better survival rate — but do these figures really provide an accurate representation of each doctor’s performance?

Deeper analysis reveals the following: of the 100 procedures performed by Surgeon A,

  • 50 were classified as high-risk; 47 of those patients survived (94% survival rate)
  • 50 procedures were classified as routine; 48 patients survived (96% survival rate)

Of the 80 procedures performed by Surgeon B,

  • 40 were classified as high-risk; 32 patients survived (80% survival rate)
  • 40 procedures were classified as routine; 40 patients survived (100% survival rate)

When we include the lurking classification variable (high-risk versus routine surgeries), the results are remarkably transformed.

Now we can see that Surgeon A has a much higher survival rate in the high-risk category (94% v. 80%), while Surgeon B has a better survival rate in the routine category (100% v. 96%).
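The reversal above is easy to verify with a few lines of Python. The script below simply recomputes the survival rates from the counts given in the example (the data structure and function names are illustrative, not from the original post):

```python
# Survival counts from the example: (survived, total) per risk category.
surgeons = {
    "A": {"high_risk": (47, 50), "routine": (48, 50)},
    "B": {"high_risk": (32, 40), "routine": (40, 40)},
}

def rate(survived, total):
    """Survival rate as a fraction."""
    return survived / total

for name, cases in surgeons.items():
    survived = sum(s for s, _ in cases.values())
    total = sum(t for _, t in cases.values())
    print(f"Surgeon {name}: overall {rate(survived, total):.0%}, "
          f"high-risk {rate(*cases['high_risk']):.0%}, "
          f"routine {rate(*cases['routine']):.0%}")
```

Each surgeon's overall rate is a blend of the category rates weighted by that surgeon's case mix; because Surgeon A took on far more high-risk cases, A can lead overall while trailing B in every comparison that actually matters.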

Let’s consider the second scenario, where numerical values can change results.

First, imagine that every month, the results of a patient satisfaction survey are exactly the same (Table 1).

patient-satisfaction-survey-table1

The Table shows that calculating an average of each month’s result produces the same result (90%) as calculating a Weighted Average (90%). This congruence exists because each month, the denominator and numerator are exactly the same, contributing equally to the results.

Now consider Table 2, which also displays the number of responses received from a monthly patient-satisfaction survey, but where the number of responses and the number of patients who report being satisfied differ from month to month. In this case, taking an average of each month’s percentage allows some months to contribute to or affect the final result more than others. Here, for example, we are led to believe that 70% of patients are satisfied.

patient-satisfaction-survey-table2

All results should in fact be treated as the data-set of interest, where the denominator is Total Responses (2,565) and the numerator is Total Satisfied (1,650). This approach correctly accounts for the fact that there is a different number of responses each month, weights every individual response equally, and produces a correct satisfaction rate of 64%. That is a difference of 6 percentage points from our previous answer of 70%, or roughly 150 patients!
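A short Python sketch makes the difference between the two calculations concrete. The monthly figures behind Table 2 are not reproduced in this post, so the numbers below are hypothetical, but they show the same effect: an average of averages gives a low-volume month the same weight as a high-volume one.

```python
# Hypothetical monthly survey results: (satisfied, responses) per month.
months = [(90, 100), (80, 100), (40, 100), (150, 1000)]

# Average of averages: each month's percentage counts equally,
# regardless of how many responses that month received.
avg_of_avgs = sum(s / n for s, n in months) / len(months)

# Weighted average: pool all responses before dividing,
# so every individual response counts equally.
weighted = sum(s for s, _ in months) / sum(n for _, n in months)

print(f"average of averages: {avg_of_avgs:.0%}")
print(f"weighted average:    {weighted:.0%}")
```

With these made-up numbers the naive average of averages comes out around 56%, while the true pooled satisfaction rate is about 28%: the one big month with mostly dissatisfied patients is swamped by three small months when each month is given equal weight.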

How we calculate averages really does matter if we are committed to understanding our data and reporting it correctly. It matters if we want to identify opportunities to improve, and are committed to taking action.

As a final thought about averages, here is a wryly amusing bit of wisdom on the topic that also has the virtue of being concise. “No matter how long he lives, a man never becomes as wise as the average woman of 48.” -H. L. Mencken.

I’d say that about sums up lurking variables and weighted averages — wouldn’t you?



© 2023 Brandeis GPS Blog
