OnPage https://www.onpage.com

Who’s on First?
https://www.onpage.com/whos-on-first-improving-hospital-workflow/ (Thu, 16 Feb 2017)


Improving hospital workflow through OnPage alerting

At many points in a hospital’s functioning, workflow touches the outcome. Good workflow leads to good outcomes. And, similar to IT, good workflows are indicative of a lean organization. One particular area of the hospital that needs to feel the hand of technology and workflow improvement is doctor and on-call scheduling. Traditional workflow has on-call scheduling committed to a whiteboard. However, in a field that is hungry for improvement, medicine can surely optimize this process and make it leaner.

What are some ways to improve doctor scheduling, and what are the potential impacts of those improvements? Read on.

How poor scheduling makes for poor hospital workflow

Scheduling doctors is an important part of a hospital’s functioning. It’s how doctors know who’s on call first and what’s on second, to paraphrase Abbott and Costello. Yet in traditional doctor scheduling, information is committed to a whiteboard or a few sheets of paper. Any change to the schedule is made with pen, paper or marker. This process is inefficient at best because it adds an extra step when trying to reach a doctor on their pager. Rather than immediately contacting the individual on call, the nurse either goes to the whiteboard or looks for the most recent printout of the schedule.

Furthermore, since pagers are often used to alert doctors, they are at the mercy of what is becoming an increasingly unreliable technology. We have written extensively on the problems with pagers in hospitals, such as in our recent post, Land of the Walking Dead Zone. The problems center on issues such as:

  • Pages can be blocked by the physical infrastructure in hospitals and the surrounding environment
  • Doctors receive alerts on pagers and then need to bring the conversation to their cell phones
  • There is no way to confirm that a doctor has received a page

So, when you take a poor workflow like the one described above and marry it to a poor technology, you will inevitably introduce significant problems into the healthcare setting.

Overall, the conclusion is that using a whiteboard for care coordination simply leads to poor workflow. Clearly, technology is needed to automate the process and eliminate the steps that introduce the possibility of error.

Improve scheduling and improve hospital workflow

Our recent case study on SAGE Neurohospitalist Group highlights the benefits of improved workflow in a hospital setting. SAGE is a teleneurology group based in California that provides neurology services to rural and underserved communities. Formerly, when one of its client hospitals needed a neurologist:

  1. The client hospital contacted the office administrator at SAGE, indicating it needed a neurologist.
  2. The office admin at SAGE looked up the on-call neurologist’s number and paged them.
  3. If the neurologist received the page, they answered. However, the office admin never knew whether the page was received.
  4. If the neurologist was busy with another patient, they had to wait until they were free to answer the page.

This whole process could cause up to a 20-minute delay in answering a rural hospital’s request for neurological care.

Realizing the inefficiencies of this process, SAGE introduced our product into its workflow. Now, when a patient arrives at a rural hospital in SAGE’s network, the hospital simply calls the SAGE OnPage account associated with it. In under one minute, a neurologist is contacted and able to return the rural hospital’s call.

There are no longer any administrators at SAGE who need to look up a doctor’s pager number. Instead, a rural clinic simply dials into SAGE’s OnPage account and whoever is the attending neurologist for the day receives the call. The rural hospital only needs to keep track of one number, and with that number it can immediately page the neurologist on call. If the first neurologist is unable to answer, the page is forwarded to the second neurologist on call.

By using this improved workflow, SAGE has:

  1. Experienced no missed pages since implementing OnPage.
  2. Eliminated pages lost to connectivity and range issues, as OnPage works over Wi-Fi.
  3. Automated page escalation. If the first SAGE neurologist is not able to respond to an alert, the page is automatically forwarded to the next neurologist on call.
  4. Been able to guarantee its clients a response in under five minutes, although 95% of responses occur in one minute or less.
  5. Enabled tracking through OnPage’s Audit Trail, increasing the accountability of SAGE’s service.
  6. Ensured that all pages are attended to within the first two escalations, which means patients receive care faster.
  7. Realized business growth of 700%!

By improving the workflow, not only did SAGE reach patients faster, it also improved its business.

Conclusion

Clearly, lean workflow cannot be ignored. It’s too important a step to overlook since its impact, as seen with SAGE, can be transformative to both lives and the bottom line.

We’ll be at HIMSS17 next week and would welcome the opportunity to schedule some time with you to talk about workflow. We’re at booth #343. Sign up!

OnPage Supports Teleneurology through Telemedicine Pager
https://www.onpage.com/onpage-supports-teleneurology-through-telemedicine-pager/ (Wed, 15 Feb 2017)


SAGE NeuroHospitalist is a privately held California-based company that provides rural clinics with neurological services through telemedicine. SAGE currently employs 15 physicians and works with 30 hospitals in small towns in California, Arizona and Nevada. They provide coverage to rural hospitals in their network 24/7, 365 days per year. SAGE is an OnPage power user and we want to tell their story.

Inefficient paging process and response delays

SAGE went through a convoluted process when trying to connect a patient with a neurologist. Before OnPage, SAGE went through the following steps:

  1. A patient comes to a rural hospital in need of neurological care.
  2. A doctor or nurse at the rural hospital contacts the SAGE administrator.
  3. The administrator at SAGE jots down the rural hospital’s information and pages the neurologist with it.
  4. If the SAGE neurologist is available, he or she answers the page and contacts the rural hospital.
  5. If the neurologist is busy with another patient, they either don’t respond to the page or call the SAGE administrator back with instructions on how to respond to the hospital’s request.

The whole process could take as long as 20 minutes from the time a rural hospital contacted SAGE until a neurologist contacted the rural hospital. This wasted valuable time.

Missing pages and lack of automation

SAGE neurologists had difficulty with pagers as well. To begin with:

  1. Pages were lost and there was no infrastructure in place to track pages.
  2. Delays in responding to received pages were a regular occurrence, with no system in place to forward the page to another SAGE neurologist.
  3. SAGE experienced pager connectivity and range issues.

Solution: OnPage’s HIPAA compliant alerting and teleneurology paging!

Currently, SAGE uses OnPage to let clinics reach neurologists with an immediate and prominent alert, backed by automatic escalation. SAGE saw significant improvements after implementing OnPage:

  1. SAGE has not experienced missed pages ever since implementing OnPage.
  2. Pages lost due to connectivity and range issues are no longer a problem as OnPage works with Wifi.
  3. Page escalation is now an automated process. If the first SAGE neurologist is not able to respond to an alert, the page is automatically forwarded to the next neurologist on-call.
  4. SAGE is now able to guarantee its clients a response in under five minutes, although 95% of responses occur in one minute or less.
  5. All pages are tracked through OnPage’s Audit Trail, increasing the accountability of SAGE’s service.
  6. All of the pages are attended to within the first two escalations which means patients receive care faster.
  7. The automation of alerts and improved workflow resulting from OnPage has increased SAGE’s growth by 700%!

Improving business process through OnPage reporting

At SAGE’s central headquarters, the company’s Operations Administrator, Melinda Chiem, keeps track of call volume, response time and any delays that occur. She receives this information through OnPage’s reporting system, which also allows her to create graphs that visually represent the progress of the company’s neurologists in meeting the needs of the hospitals. Reporting likewise helps SAGE better analyze and forecast the clinics’ needs.

To learn more about our solution…

5 Powerful lessons for Start-ups from Tom Brady
https://www.onpage.com/5-powerful-lessons-for-start-ups-from-tom-brady/ (Tue, 07 Feb 2017)


Sunday night’s stunning victory by the New England Patriots over the Atlanta Falcons was a real nail biter. At the end of the first half, I had been ready to call it quits and avoid the pain of watching my favorite hometown team get destroyed by Atlanta. Atlanta was ahead 21-3.

I missed Lady Gaga because I was sure the Pats were going to lose. My daughter however was the true fan and refused to leave the couch. At the end of the 3rd quarter, the score was 28-9. It was only in the beginning of the fourth quarter that I began to hear cheers coming out of the living room and I started to wonder if I had written off the team, my team, too fast.

And indeed I had. In the fourth quarter, Tom Brady executed phenomenal passing, finding Danny Amendola for a touchdown that, with a two-point conversion by James White, brought the score to 28-20. Brady then led the tying drive, capped by a James White touchdown and a two-point conversion from Danny Amendola. In the first overtime in Super Bowl history, James White ran in the deciding touchdown.

The emotional rollercoaster of the game made me think that there are some real and powerful lessons we can learn from the phenomenal play shown by Tom Brady and his teammates. For us at OnPage in particular, I think there are a lot of useful take-home lessons about grit and persistence.

Lesson 1: Stay Cool but Don’t Relax

This is actually a lesson that pertains to both Brady and Matt Ryan.

Alex Mack of Atlanta said, “You’ve got to be able to finish, and it’s an unfortunate lesson to have to learn.” Atlanta practically had the game in hand. But, they got tired and couldn’t maintain their focus after the 3rd quarter.  Atlanta just couldn’t finish what they started. They relaxed the pressure on New England and lost their edge. And that was just the opening that New England needed.

At the same time, Brady and the Patriots could have let their dismal performance through the first half bring them down and let them think that they didn’t deserve to win the Super Bowl. But Brady didn’t lose his cool or his focus. As teammate Danny Amendola said, “He was the same as he always is: cool, calm and collected.” With that focus and persistence, the Pats were able to overcome what seemed like an insurmountable Atlanta lead.

Lesson 2: Don’t let failure get in the way of success

One of the ways Brady kept his team focused was by telling them to keep their minds in the game. “Keep fighting,” he told his teammates. The Pats’ failure to get momentum going on the field during the first half could have stopped them from feeling like they had a chance of winning. Their first four drives included a sack of Tom Brady and a fumble.

However, the Pats just kept up the pressure on the Atlanta defense. They stayed focused on their goal and didn’t let the Falcons’ interception of Brady’s throw knock them off course.

Lesson 3: Believe in the impossible

Some thought that since no team had ever come back from a deficit of more than 10 points in the Super Bowl, the Patriots couldn’t win after trailing 21-3 at the half. Some also thought that since there had never been an overtime in a Super Bowl game, there couldn’t be an overtime in this game.

But ‘never before’ doesn’t mean it isn’t going to happen. It just means it hasn’t happened yet.

Brady and his team showed that it just hadn’t happened yet. Their focus and determination were second to none Sunday night. They just kept believing in themselves and in their ability to win.

Lesson 4: NEVER Give UP

Which brings me to point number four. NEVER GIVE UP. Simply put, the Pats never gave up. They didn’t give up when they were down by 18 points at the half. They didn’t give up when they had to make up 19 points in the 4th quarter. They didn’t give up in overtime. They persisted and persisted and never gave up until they won.

Lesson 5: Maintain your humility

This last lesson is one that doesn’t get discussed enough in lessons on start-ups. There’s a fair amount of braggadocio in sports and in the start-up world. Brady never speaks as one who believes he deserves all the credit or as if he is the GOAT. Instead, he always credits his team, his coaches and hard work. He never credits his own brilliance or superior strength; he maintains his humility.

For start-ups, humility is an important ingredient. Humility is what allows start-up leaders to realize that just because they have seen some success, they are never done learning.  Startups always need to be ready to take criticism, change and move forward.

Conclusion

Start-ups don’t have a Super Bowl to prove their success. Instead, success is shown through growing the business, being acquired or going public, or some combination of the three. We look forward to learning the lessons of Brady and company and to our own continued growth.
Workflow in Healthcare Discussion with Charles Webster MD
https://www.onpage.com/workflow-in-healthcare-discussion-charles-webster/ (Mon, 06 Feb 2017)

Without workflow in healthcare, data is just a bottleneck

In the weeks leading up to HIMSS, I have been trying to get a pulse on the themes that will drive the conversation at the conference. One of the themes I have heard discussed at length is improving workflow in healthcare. Workflow extends to many aspects of medicine. However, the facet that interested me most is why healthcare has only now started to focus on a theme that has been prevalent in many other fields for decades.

Indeed, workflow has also been a topic that we have discussed at several points in our own discussion of healthcare and IT topics.  Given this overlap between the workflow conversation at HIMSS17 and writings on our blog, I decided to reach out to Dr. Charles Webster who is one of the HIMSS17 Social Media Ambassadors.

Charles is a bit of a Renaissance man with degrees in accountancy, computational linguistics, industrial engineering, artificial intelligence and medicine. He calls himself Dr. Workflow. He writes his blog at wareflo.com and lectures and teaches on improving workflows in healthcare.

The excerpts below represent a summary of our conversation.

Q: How did you get interested in the field of workflow?

I started off studying pre-med and accounting when I went to college, so I got degrees in accountancy. I also got degrees in industrial engineering, artificial intelligence and medicine. If you look at the intersection of all this, it’s health IT, clinical decision support and workflow. I guess that makes me Dr. Workflow.

Q: Was it immediately apparent to you when you studied medicine that the field needed an improvement in workflow?

Yes.

But, I didn’t set out to be a workflow expert. When I was going for a PhD in Health Systems Engineering, my advisor, who had a PhD in applied math and did studies on improving healthcare, bemoaned that none of the doctors she worked with took her seriously. She recommended that I get an MD. So, I went to the University of Chicago Medical School. While in medical school, I got to see the workflows of the various fields: dermatology, psychiatry, etc. After medical school, I worked at a hospital for a few years in MIS and then for an EHR vendor, where I worked in general medicine. So I got a very good view of how medicine flows and its needs for improvement.

I think my background in industrial engineering is what glued it all together. With that background, I was able to effectively analyze the inefficiencies in medicine.

Almost 6-7 years ago, I saw the workflow industry and the healthcare IT industry increasingly overlap, and I saw the opportunity to bring to healthcare what I had done for one EHR (electronic health record) client. At HIMSS and in all my social media and blogs, I am trying to emphasize the importance of workflow in healthcare.

Q: How much are doctors thinking about workflow?

Like medicine, workflow also has its Triple Aims:

  • Educate healthcare about workflow. Tools do not equal workflow; they just help achieve workflow.
  • Find success stories in workflow
  • Bring individuals from outside of health IT into health IT with interesting ideas and products about workflow. These are the individuals who are helping doctors think about workflow.

Everywhere I go at healthcare conferences, everyone in the halls is talking about workflow. They ask, ‘How are you doing your workflow?’

Now, when I look at websites of companies attending HIMSS, they are all talking about workflow. Doctors are thinking about workflow. It has just taken a while.

Q: How do you see the impact of workflow in healthcare?

In 2011, there was virtually no mention of workflow in HIMSS exhibitor websites. Now 1/3 to 1/2 of websites mention workflow integration. The growth of EHR accounts for much of the growth in workflow technology and discussion from 2011 to today. In 2011, the rush towards EHR started but without a workflow to manage the barrage of data, healthcare hit the wall. The only way to go forward was to embrace a paradigm shift which focuses on workflow.

Data is fighting the workflow war. Health IT is 50% data + 50% workflow. Data isn’t a panacea. It needs workflow.

Q: Why is healthcare behind IT in adoption of workflow technologies?

Health IT is about ½ generation behind other IT verticals. This is because health IT has been so large for so long and reimbursed in cost plus fashion. There was no competitive pressure. Health IT presented such a large economy and it didn’t have much contact with the rest of IT. As such, it had little competitive pressure to influence it.

Q: Outside of healthcare, where else do you see workflow’s importance?

Indeed we see workflow tech showing up everywhere. It’s in DevOps. It’s in SecOps.

DevOps is all about using orchestration technology to model processes such as deployment to the cloud. SecOps is all about applying orchestration technology to cybersecurity before, during and after security incidents. For example, when an incident happens, we can push out patches, revoke permissions and run post-mortems.

DevOps and SecOps demonstrate an evolution of architectures. They are taking workflow out of individual applications and sharing it among applications.

Conclusion

We will continue to bring you other interviews from thought leaders at HIMSS in the days leading up to the HIMSS Conference. Sign up to meet with us at HIMSS.

To learn more about OnPage’s workflow integrations and technology, contact us or visit us at HIMSS17 at booth 343.

You can tweet to Charles at @wareFLO or visit him at booth 7785, the very first HIMSS conference makerspace, during HIMSS17.

A very short primer on DIY, technical debt and DevOps alerting
https://www.onpage.com/primer-on-diy-technical-debt-and-devops-alerting/ (Wed, 01 Feb 2017)


A cautionary tale

Faced with limited financing and a high burn rate, many startups focus on product development and application coding at the expense of back-end operations engineering. The reasons for this focus are understandable to some extent. Companies need to develop product, and unseasoned CEOs don’t always see the value in investing in IT Ops. Some call this movement towards operating without IT Ops “NoOps” or “serverless”.

Yet there are “dusty old concepts”, as Charity Majors calls them, that resurface when companies fail to worry about things like scalability and graceful degradation and assume those will take care of themselves. The problem becomes even more significant when developers try to compensate for not having an Ops team by creating DIY tools to fill the shortfall. To paraphrase the poet Robert Browning, their reach exceeds their grasp. And with this reach comes technical debt.

What is technical debt

Martin Fowler has a good notion of technical debt. He describes it as follows:

You have a piece of functionality that you need to add to your system. You see two ways to do it, one is quick to do but is messy – you are sure that it will make further changes harder in the future. The other results in a cleaner design, but will take longer to put in place.

In this explanation, you can see the tradeoff. The quick and messy way creates technical debt, which, like financial debt, has implications for the future. If we choose not to pay down the technical debt, we will continue to pay interest on it. In development, that means we will often have to go back to that quick and dirty piece of code and pay with extra effort that wasn’t really necessary.
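To make the tradeoff concrete, here is a small hypothetical sketch (not from Fowler’s article, and the domain is invented): the quick-and-messy option bolts each new case onto a growing conditional, while the cleaner design pays a little more up front for a structure that absorbs change cheaply.

```python
# Quick and messy: each new requirement is bolted onto a growing conditional.
# Works today, but every future change means editing (and re-testing) this chain.
def shipping_cost_quick(country: str, weight_kg: float) -> float:
    if country == "US":
        return 5.0 + 0.5 * weight_kg
    if country == "CA":  # added later in a hurry
        return 8.0 + 0.7 * weight_kg
    raise ValueError(f"unsupported country: {country}")

# Cleaner design: the same behavior, but a new country is a one-line data change.
RATES = {"US": (5.0, 0.5), "CA": (8.0, 0.7)}  # base fee, per-kg rate

def shipping_cost_clean(country: str, weight_kg: float) -> float:
    base, per_kg = RATES[country]
    return base + per_kg * weight_kg
```

Both functions return the same answers today; the “interest” on the quick version is paid every time a new country or pricing rule arrives.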

Alternatively, developers can invest in better design and, in the case of this argument, bring in IT operations to worry about the important things Ops worry about, like scalability, graceful degradation, queries and availability.

Unfortunately, DIY practitioners often don’t invest the time in thinking through this tradeoff. Given financial circumstances, rushed deadlines, short-sightedness or some combination thereof, they choose the quick and dirty option.

A cautionary DIY tale

My colleague Andrew Ben, OnPage’s VP of Research and Development, spoke with Nick Simmonds, the former Lead Operations Engineer at Datarista. Nick described a previous job where one of his first tasks was to gain control of a DIY scaling tool that had been developed in-house. The tool was created before any operations engineers had been hired, and it was designed as a “quick and dirty” method of provisioning servers.

According to Simmonds, the faults of the tool were significant. For example, the tool was meant to eliminate the need for manual scaling of microservices, but it simply spun up new instances without pushing any code to them. Furthermore, when servers were spun down, the tool never checked whether the code was working on the newest servers before it destroyed the old ones. The company was regularly left with new servers running no code at all.

A significant part of the problem with the tool Nick’s colleagues built is that the tool didn’t come with any alerting component. His team only recognized the failure when they were in production and live. No monitoring, no alerting. Nothing was in place to let Simmonds’ team know the deployment had failed.
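A safer provisioning pattern, sketched below with hypothetical stand-ins for the real infrastructure calls, does the two things the DIY tool skipped: it health-checks the new instances before destroying the old ones, and it surfaces a loud, visible failure (which a monitoring system could turn into an alert) instead of failing silently in production.

```python
def rotate_instances(old_fleet, new_fleet, is_healthy):
    """Retire the old fleet only after every new instance passes a health check.

    `is_healthy` is a hypothetical hook standing in for a real check, such as
    hitting an HTTP health endpoint to confirm the deployed code is running.
    """
    unhealthy = [inst for inst in new_fleet if not is_healthy(inst)]
    if unhealthy:
        # Keep the old fleet serving traffic and surface the failure loudly,
        # rather than destroying working servers and discovering it later.
        raise RuntimeError(f"deploy failed health check on: {unhealthy}")
    # Only now is it safe to tear down the old instances.
    return {"serving": new_fleet, "retire": old_fleet}
```

The key design choice is that the unhappy path raises rather than proceeding: a deployment that cannot prove its new instances are healthy should never be allowed to delete the only working copies of the service.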

Never try DIY on DevOps alerting

I don’t want this cautionary tale to cause nightmares for any young DevOps engineers out there. I wouldn’t want that on my resume. Instead, I want to impress upon application developers the need to be mindful of how “quick and dirty” impacts future operations and releases. Tools shouldn’t be created as a temporary hack until ops gets on board.

Teams should invest the time into creating a robust piece of code or tool. Alternatively, if they don’t have the time, they should try to invest in tools that accomplish the desired result. Furthermore, and this is something Nick brought out in his interview with Andrew, never try to hack together a tool for monitoring and alerting.

Alerting is too important and complex to leave to a hack or technical debt. For example, here are some of the main points a robust alerting tool needs to accomplish:

  1. Doesn’t rely on email for alerting: Alerting through email is a way for alerts to get lost. Just remember that Target’s engineers received an email alert indicating anomalous traffic several days before they noticed the extent of the theft of their users’ credit card information. The email alert got downplayed.
  2. Is persistent: Alerts need to continue in a loud and persistent manner until they are responded to.
  3. Has escalation: If the person singled out for the alert cannot answer it, the alert needs to escalate to the next person on-call.
  4. Elevates above the noise: There is so much noise from alerts in IT that the alert needs to grab your attention, whether through redundancies that alert you via phone, email and app, or by being loud enough to wake you up at 2 a.m.
  5. Creates actionable alerts: When alerts are sent to the engineer, make sure they come with actions the recipient should follow. Simple ‘alert’ statements don’t help rectify the situation.
  6. Time-stamps alerts: For reporting and service improvement, you want to know when an alert was sent and when it was received.
  7. Enables attachments: You want to be able to attach text files or images to alerts to amplify the useful data sent in the messages.
  8. Uses webhooks or APIs so it can grow: Enable your alerting tool to grow with your application so that it can integrate with software that enhances your capabilities.
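Several of these requirements (escalation, time stamps, an audit trail) can be sketched as a simple escalation loop. This is an illustrative toy, not OnPage’s implementation; the responder names and the `send_page`/`wait_for_ack` hooks are hypothetical stand-ins for a real delivery channel and acknowledgment check.

```python
import time

def page_with_escalation(message, on_call, send_page, wait_for_ack):
    """Page responders in on-call order until one acknowledges the alert.

    `send_page` delivers the alert; `wait_for_ack` reports whether the
    responder acknowledged it. Both are hypothetical hooks.
    """
    attempts = []  # time-stamp every page for reporting / audit trail
    for responder in on_call:
        send_page(responder, message)
        attempts.append((responder, time.time()))
        if wait_for_ack(responder):
            return {"acked_by": responder, "attempts": attempts}
    # Nobody acknowledged: the caller should escalate out-of-band, not drop it.
    return {"acked_by": None, "attempts": attempts}
```

In a real system each attempt would also repeat loudly until a timeout before escalating; the loop above captures just the ordering and the audit trail.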

Alternatives to DIY alerting

Alerting is more than just enabling an email to show up in your inbox. Email alerting is useful only if your job requires you to have your eyes glued to email at all times. No letting down your guard. Instead you need to maintain constant vigilance.

If you instead wish to invest in a robust alerting tool that has all of the capabilities mentioned above already fully tested and working, then contact OnPage. We have alerting figured out.

OnPage is a critical alerting and incident notification platform used by DevOps and IT practitioners. Download a free trial to get started on the path to better incident management.

SAGE’s Telemedicine Embraces OnPage’s Critical Alerting
https://www.onpage.com/sages-telemedicine-embraces-onpage-critical-alerting/ (Mon, 30 Jan 2017)


OnPage brings Secure Critical Messaging to Telemedicine

SAGE Neurohospitalist Group began in 2012 when two neurologists recognized a growing shortage of neurologists, a shortage most acute in small towns and underserved communities. So, in a callback to the aphorism “if the mountain won’t come to Mohammed …”, SAGE’s founders turned to telemedicine to deliver neurology services to small and underserved hospitals.

The need for neurology services has never been greater as the incidence of brain disorders linked to aging, such as stroke and neurodegenerative disorders, soars. Today’s shortfall of 11% will jump to 19% by 2025 and will have a profound impact as more patients enter the healthcare system. Today, 1 in 6 Americans is affected by neurological disease. Telemedicine is uniquely able to bridge the gap between need and availability.

Time is of great importance when providing teleneurology to those in need. In neurology, time is brain. Every second lost due to delay could impact brain health. By using OnPage, SAGE was able to bring immediate secure messaging to the clinics in its network.

What is telemedicine

According to Dr. John Halamka, CIO at Harvard Medical School, telemedicine can take many forms. It can be a video teleconference between clinicians, a consult between patient and clinician, or a group of physicians consulting with another group of physicians. It can also be secure texting to coordinate patient care. Telemedicine can be all these things. What is certain, though, is the explosive growth of telemedicine. Dr. Halamka expects that 2017 will see “exponential growth” in telemedicine as healthcare tries on new models for keeping people healthy.

One field of medicine affected by the growth of telemedicine is neurology. At present, there is a shortage of trained neurologists, particularly in rural and underserved communities throughout the U.S. In part, this is because of the shortage of doctors training in the field. Coupled with this reality is an aging population, which tends to have a greater need for neurological services. Teleneurology is the way many rural and underserved communities remedy their lack of neurologists on staff.

SAGE has been ahead of the exponential growth curve. By introducing telemedicine to these underserved communities, doctors can quickly reach a diagnosis and decide whether it makes sense to transfer a patient to a major hospital for conditions such as stroke or epilepsy. Left untreated, these patients would have a terrible prognosis and face a high mortality rate. But treated within a reasonable timeframe through telemedicine, patients can be diagnosed and transferred to a major hospital if necessary.

How OnPage helps SAGE

One of the obstacles SAGE faced was finding a fool-proof method for routing alerts to physicians when a hospital in its network had an immediate need for its teleneurology services. SAGE’s neurologists needed a tool that allowed the clinics in their program to contact a neurologist immediately. OnPage filled that need.

Stroke is the most frequent condition that SAGE physicians handle. When a patient presents with stroke, it is critical to get in touch with a neurologist as quickly as possible. If a clinic in the SAGE network receives a patient requiring immediate attention from a neurologist, the clinic calls up the SAGE OnPage account.

Steps for a neurologist receiving an OnPage alert

  1. SAGE’s neurologists are entered into an on-call schedule. There are typically three physicians in a schedule.
  2. A clinic in the SAGE network calls SAGE’s OnPage account. The clinic’s phone number is sent in the OnPage message to the SAGE neurologist.
  3. If the first neurologist in the on-call schedule is unavailable because he or she is consulting with another clinic, the call escalates to the next doctor on call.
  4. The doctor receives the clinic’s phone number and calls back to consult. Video conferencing is introduced as needed.
  5. From the time a neurologist is contacted until he or she calls back the hospital, no more than 5 minutes will have elapsed.
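The escalation in the first three steps boils down to walking an ordered schedule and skipping unavailable physicians. Here is a minimal sketch in Python; the names and data model are invented for illustration and are not OnPage's implementation:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Neurologist:
    name: str
    available: bool

def first_available(on_call: List[Neurologist]) -> Optional[Neurologist]:
    """Walk the on-call schedule in order, escalating past busy physicians."""
    for doc in on_call:
        if doc.available:
            return doc
    return None  # schedule exhausted; in practice, re-alert or notify an administrator

# Illustrative three-physician schedule
schedule = [
    Neurologist("Dr. A", available=False),  # already consulting with another clinic
    Neurologist("Dr. B", available=True),
    Neurologist("Dr. C", available=True),
]
print(first_available(schedule).name)  # escalates past Dr. A to Dr. B
```

The point of the sketch is simply that escalation is deterministic: the alert always lands on the first reachable doctor in the schedule.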

The clinic is assured of receiving a response from an on-call physician within 5 minutes. When the doctor gets in touch with the facility and the patient, the doctor will then examine the patient and consult with on-site physicians.

The future of telemedicine

Telemedicine is one of the major themes at the upcoming HIMSS 2017 conference. Many of the social media ambassadors (SMAs), such as Tamara StClaire and Lygeia Ricciardi, are discussing telemedicine as a key part of the future of effective healthcare. Telemedicine’s importance lies in its ability to create virtual visits for patients and thus improve outcomes through easier access to healthcare.

Online journal Health IT News makes it clear that telemedicine can impact healthcare for millions of people, especially those in more remote communities and regions. Some of the hesitation around telemedicine can be attributed to social factors: mainly, people aren’t used to receiving care via a conference hook-up. However, as people get more comfortable with the technology on both the provider side and the patient side, the practice will grow.

Telemedicine, SAGE and OnPage

In the future, telemedicine will increasingly become a feature of our standard healthcare. OnPage has shown the usefulness of its smartphone pager app in enabling SAGE’s provision of telemedicine for rural and underserved communities. We see secure messaging and critical alerting tools increasingly becoming part of the telemedicine package.

SAGE’s effective and powerful service model demonstrates how healthcare can work around its inherent shortages to provide neurology services in an immediate and secure fashion. OnPage is proud to be instrumental in SAGE’s success.

Read our full SAGE case study.


]]>
What everybody should know about log analysis and effective critical alerting https://www.onpage.com/what-everybody-should-know-about-log-analysis-and-effective-critical-alerting/ Wed, 25 Jan 2017 15:05:11 +0000 https://www.onpage.com/?p=27514 The Great Wall of China began construction in 7 B.C. to protect the Chinese kingdom from Eurasian warriors. Chinese soldiers would marshal forces to protect the Great Wall from enemy attack by using smoke signals to send alerts from tower to tower. This method of alerting enabled messages to be sent to garrisons hundreds of miles … Continued

The post What everybody should know about log analysis and effective critical alerting appeared first on OnPage.

]]>

The Great Wall of China began construction in the 7th century B.C. to protect the Chinese kingdom from Eurasian warriors. Chinese soldiers would marshal forces to protect the Great Wall from enemy attack by using smoke signals to send alerts from tower to tower. This method of alerting enabled messages to be sent to garrisons hundreds of miles away in just a few hours’ time. With these alerts, soldiers could prepare to convene and combat their enemies.

Today’s IT teams face an analogous challenge: their systems generate vast amounts of log data that must be turned into timely signals. How can teams effectively analyze this vast amount of data from their various systems? How can they use this data to troubleshoot issues when they do arise? How can they use this data to prepare for the IT dangers they know of and those that are unforeseen? Furthermore, how can they make sure they are alerted when serious issues and even dangers arise?

Why is Log Analysis Important for IT teams?

While most developers and DevOps teams believe in the importance of log analysis, they consider it akin to eating spinach – it’s good for you, but do we really have to do it? Logs contain a lot of important information on how a system is behaving, but analyzing them is a lot of work. Avoiding this analysis, though, is dangerous: without it, a company cannot recognize the threats and opportunities that lie before it.

Most companies run on multiple servers and have numerous devices producing logs that inform troubleshooting, monitoring, business intelligence and SEO. Furthermore, as written in a previous article, IT infrastructure continues its move to public clouds such as Amazon, Microsoft Azure and Google Cloud, making it more difficult to isolate issues. And since server usage in the cloud fluctuates with specific loads, environments and the number of active users, obtaining an accurate reading can become quite difficult.

Yet with centralized log analysis, you can normalize the data in one database and acquire a sense of the system’s “normal state.” Log analysis can provide insight into cloud-based services as well as localized systems. The analysis shows how the network looks when it is humming along. Knowing baseline traffic, companies then have a sense of how to view the outliers. What should our site traffic look like? Which error logs are normal and consistent with system traffic, and which are cause for alarm? Having answers to these questions enables engineers to make data-informed decisions.
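As a rough illustration of baselining, here is a sketch that derives a "normal state" from hypothetical hourly error counts and flags outliers. The counts and the two-standard-deviation threshold are invented for illustration, not taken from any real deployment:

```python
import statistics

# Hypothetical hourly error counts gathered from centralized logs
hourly_errors = [12, 9, 14, 11, 10, 13, 96, 12, 11]

# The "normal state": what error volume usually looks like
baseline = statistics.mean(hourly_errors)
spread = statistics.stdev(hourly_errors)

# Flag hours whose error volume sits far outside the baseline
outliers = [n for n in hourly_errors if n > baseline + 2 * spread]
print(f"baseline ~ {baseline:.1f} errors/hour, outliers: {outliers}")
```

Even this toy version shows the value: the spike to 96 errors in one hour stands out only because the baseline has been established first.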

Furthermore, logs and log analysis provide insight into many key points of information throughout deployment. Analytics can be applied to system logs, webserver logs, error logs and app logs. Logs give us a way to see traffic, incidents or events over time. By including log analysis as part of healthy system monitoring, the seemingly impossible process of reading logs and responding to their information becomes possible. With log analysis, companies can optimize and debug system performance and surface bottlenecks in the system.

Where does ELK come in

There are several software packages that provide log analysis capabilities. Some large enterprises use packages such as Splunk and Sumo Logic, but these can get quite expensive at scale. Instead, many in the DevOps community have moved toward the ELK (Elasticsearch, Logstash and Kibana) stack for their log analysis. ELK’s components can be used separately. But, joined together, they give users the ability to run log analysis on top of open-source software that everyone can run for free.

ELK has many advantages over competitors – it is open source, easy to set up and provides fast performance. Of additional value is the visibility it offers into the overall IT stack. When numerous servers are running multiple applications as well as virtual machines, you need a way to easily view and analyze problems. ELK provides this opportunity in a low cost way that correlates metrics with logs.
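To make Logstash’s role concrete, here is a minimal sketch of the kind of structuring it performs: parsing a raw web-server log line into named fields ready for indexing in Elasticsearch. The log line and field names are hypothetical, and a real deployment would use Logstash’s grok filters rather than a hand-written regex:

```python
import re

# Hypothetical combined-log-style line; Logstash's grok filter does this at scale
line = '10.0.0.5 - - [25/Jan/2017:15:05:11 +0000] "GET /status HTTP/1.1" 500 237'

pattern = re.compile(
    r'(?P<client>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<bytes>\d+)'
)

# Turn the unstructured line into a dict of named fields
event = pattern.match(line).groupdict()
print(event["status"], event["path"])  # fields ready for indexing and visualization
```

Once every line is an `event` like this, Elasticsearch can index it and Kibana can chart, say, the rate of 500 responses over time.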

Example of ELK solutions

One of the biggest challenges of building an ELK deployment is making it scalable. Given a new product deployment or upgrade, traffic and downloads to a site might conceivably skyrocket. Ensuring this influx doesn’t kill the system requires that all components of the ELK stack scale as well. Ideally, you would have a tool that combines these three components into a viable stack, integrated in the cloud so that scaling and security are taken care of. This is where a hosted ELK solution like Logz.io or Elastic Cloud steps in. Logz.io is built on top of Amazon’s AWS and enables this very type of scaling.

Additionally, when running a large environment, problems can originate from the network and cause an interruption in the application. Trying to correlate these issues can be very complicated and time consuming. The ELK Stack is useful in these cases because it provides a method to bring in data from multiple sources and create rich visualizations.

Where critical alerting and OnPage come in

Operational analysis is one of the more common use cases for ELK. DevOps engineers and site reliability engineers can get notifications of events such as traffic running significantly higher than usual or the error rate exceeding a certain level. Logz.io has several pre-built alerts for these situations; the alerts go to Slack or email.

Yet there is also the need to alert beyond Slack channels and email. Yes, teams are awake and monitoring systems during normal business hours. And, yes, a product like Logz.io has AI capabilities as well as crowd sourcing capabilities to help flag logs that matter. But even with this level of orchestration, teams cannot catch every system overload, every potential DDoS attack, every memory leak, every server failure. Receiving alerts about complex issues such as these is a key part of completing the picture for DevOps.

There needs to be a way to alert the DevOps on-call engineer or the IT service tech who can respond to the alert, both during and after business hours. These alerting tools need to have the following capabilities:

  • Persistence. Alerts need to continue until they are responded to.
  • Priority levels. Not all alerts are created equal. High-priority alerts must come through at any hour; low-priority alerts can wait until normal business hours.
  • Context. Alerts need to identify which system sent the message, along with a time stamp.
  • Message exchange. Alerting tools need to let device holders message one another.
  • Escalation. If the alerted individual is unable to answer the critical notification, the tool needs to automatically go to the next person in the on-call group.
  • Audit trail. To improve future responses and allow painless post-mortems, the alerting tool needs an audit trail detailing who received each message and the responses they did or did not provide.
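One minimal way to model these capabilities – priority handling, message context and an audit trail – is sketched below. The class, source names and business hours are invented for illustration and do not reflect OnPage’s actual API:

```python
from dataclasses import dataclass, field
from datetime import datetime, time
from typing import List, Tuple

HIGH, LOW = "high", "low"

@dataclass
class Alert:
    source: str                           # which system sent the message
    priority: str                         # HIGH or LOW
    timestamp: datetime                   # when the alert fired
    audit_trail: List[Tuple[str, str]] = field(default_factory=list)

    def should_page_now(self) -> bool:
        """High-priority alerts page at any hour; low priority waits for business hours."""
        if self.priority == HIGH:
            return True
        return time(9, 0) <= self.timestamp.time() <= time(17, 0)

    def record(self, recipient: str, response: str) -> None:
        """Append each delivery and response to the audit trail for post-mortems."""
        self.audit_trail.append((recipient, response))

disk_alert = Alert("monitoring-prod", HIGH, datetime(2017, 1, 25, 3, 12))
print(disk_alert.should_page_now())  # True: high priority pages even at 3 AM
disk_alert.record("oncall-engineer", "acknowledged")
```

The same `Alert` with `LOW` priority and a 3 AM timestamp would return `False` and sit until business hours, which is exactly the differentiation the bullet list calls for.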

Fortunately, OnPage Corporation’s critical alerting tool provides this level of insight and capability. Many IT shops have used OnPage’s capabilities to enable critical alerting on their systems to avoid missing important alerts and cut response time. OnPage can also be integrated with Logz.io to provide this additional level of alerting for DevOps.

Logs and IT alerting – better together

Clearly there is great and growing value in collecting and analyzing log data for IT planning, operations, and security. And while there are still challenges to be faced, best practices are emerging to help everyone understand what to expect and how to get the most returns on investments into log data collection and analysis tools.

Moving forward, it is fair to expect that an integral part of future IT planning will be enabling further correlations and analysis for known and unknown issues. As these capabilities arise, it will be important to have the log analysis tools that can scale along with the growth, as well as the critical alerting tools to alert teams when issues arise.

OnPage is a critical alerting and incident notification platform used by DevOps and IT practitioners. Download a free trial to get started on the path to better incident management.


]]>
Land of the Walking Dead Zone https://www.onpage.com/land-of-the-walking-dead-zone-pager-replacement/ Tue, 24 Jan 2017 16:19:00 +0000 https://www.onpage.com/?p=27481 Why pager replacement is still an issue OnPage has what some might call a “hate/hate” relationship with pagers. Not much room for love. As we see it, pagers are an antiquated bit of technology. Pagers are dinosaurs which, like most dinosaurs, should be extinct by now. You might be wondering why we’re at it again … Continued

The post Land of the Walking Dead Zone appeared first on OnPage.

]]>

Why pager replacement is still an issue

OnPage has what some might call a “hate/hate” relationship with pagers. Not much room for love. As we see it, pagers are an antiquated bit of technology. Pagers are dinosaurs which, like most dinosaurs, should be extinct by now.

You might be wondering why we’re at it again with our anti-dinosaur campaign. Haven’t we made our point in previous articles and thought pieces? Well, to be frank, the answer is NO. You see, last week we came across a great article in Computerworld discussing the “dead zone” issue. A dead zone is an area where you cannot receive pages due to interference from technology or from the environment you’re in.

Reading this article, we just couldn’t contain ourselves. See, the article reiterated all that we’ve rallied against in using pagers; why they are unreliable, why they can’t be trusted in an emergency, why they are obsolete. That kind of talk made us realize that the fight is still ongoing and that we need to bring the issue up once again.

What part of dead zone did you not understand?

This Computerworld article retold the tale of how a new hire is handed a pager on his first day of work at an IT company and told he will be on-call. The notion that pagers have problems and dead zones doesn’t seem to penetrate the consciousness of his manager:

I knew this particular paging provider didn’t work at my home. [I tell my] manager about the problem. Manager doesn’t actually say so, but makes it obvious that she doesn’t believe [me].

It then comes as no surprise that when there is an emergency, the new hire doesn’t receive the page at home. Readers of our OnPage blog won’t be surprised by this detail. Readers of our blog will also be sympathetic to the new hire’s getting heat from his boss when he didn’t answer the overnight page.

The reality is that environmental factors play a significant role in the ability of pagers to work. And, if you are in a job – IT operations, security, healthcare – where you need to respond to critical alerts, a dead zone can spell disaster.

Why pagers don’t work

What the Computerworld article brings to the forefront is that pagers are an unreliable method for getting critical alerts through to their intended audience. Clear enough. The question that comes up, though, is why. Why don’t pages get through?

Well, the answer lies primarily in the technology that pagers use. Pagers use radio waves – like your radio. And radio waves can be picky. If the transmission signal is weak, the page cannot be received outside the coverage area. Additionally, if your building has a metal roof, the radio waves might be blocked. Radio waves can also be corrupted by interference from other electronic equipment like your PC, cellphone or microwave. Who knew pagers were so finicky?

But an even greater problem is that pagers are not designed to repeat a page if you don’t hear it the first time. They are not a persistent form of relaying messages. Unlike the cellphone-based critical alerting that OnPage uses, pagers don’t have redundancies. So, if you don’t get the page from work the first time, you’re up that proverbial creek. Without a paddle.

Why pager replacement is the answer to dead zones

Since pagers have shown themselves to be so unreliable, those who rely on them for critical alerts must find an alternative. OnPage’s technology is Wi-Fi enabled and offers redundancies that pagers don’t. So, if you are in a dead zone, OnPage offers redundancies via email, SMS and phone to repeat the message. Really, there’s no way to miss the message.

Additionally, the technology provided by OnPage ensures that the message will continue alerting for up to 8 hours until it is responded to. So even if you miss the alert for any reason and haven’t enabled the redundancies, your message is never lost with OnPage: it stays and continues to alert you until it is responded to.
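That persistence behavior can be sketched as a simple decision function – an illustrative model of "keep alerting until acknowledged or the window closes," not OnPage’s actual implementation:

```python
REALERT_WINDOW_MINUTES = 8 * 60  # persistence window: up to 8 hours

def should_realert(minutes_since_sent: int, acknowledged: bool) -> bool:
    """Keep re-alerting until the message is acknowledged or the window expires."""
    return not acknowledged and minutes_since_sent < REALERT_WINDOW_MINUTES

print(should_realert(90, acknowledged=False))   # True: still unanswered, keep alerting
print(should_realert(90, acknowledged=True))    # False: responded to, stop
print(should_realert(500, acknowledged=False))  # False: the 8-hour window has passed
```

Contrast this with a pager, which fires once and is done: there is no `should_realert` at all.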

Conclusion

The way we see it, you can keep hoping that your boss understands when you don’t respond to pages, or you can suggest a solution that actually works. Your explanation of dead zones might get a bit old after a while. Your boss might even become a bit ticked off when you continue to miss critical alerts.

Think of it as job security. Or just plain security. Suggest a paging solution that actually works. Try OnPage.


]]>
Webinar: Presented by OnPage and BVoIP https://www.onpage.com/webinar-presented-by-onpage-and-bvoip/ Fri, 20 Jan 2017 19:25:31 +0000 https://www.onpage.com/?p=27434 We at OnPage understand the pain points of MSPs. When a critical alert or an urgent customer communication isn’t routed and handled in a timely fashion the results are usually chaotic. To find out what you can do about it, join Joe Beck, Director of Sales, OnPage, George Bardissi, CEO, BVoIP and Jeff Rolen, Service … Continued

The post Webinar: Presented by OnPage and BVoIP appeared first on OnPage.

]]>

We at OnPage understand the pain points of MSPs. When a critical alert or an urgent customer communication isn’t routed and handled in a timely fashion, the results are usually chaotic.
To find out what you can do about it, join Joe Beck, Director of Sales at OnPage; George Bardissi, CEO of BVoIP; and Jeff Rolen, Service Manager at Infinity Inc, for OnPage and BVoIP’s joint webinar on January 24th at 1 PM EST.


The need for better alerting

We have partnered with BVoIP to bring this webinar to you and have a line-up of presenters who will address a range of topics related to critical alerting best practices. We will walk you through:

  • what happens when monitoring tools trigger alerts
  • the time it takes for you to receive an alert
  • how reducing time to response drives faster incident resolution.

We cover the best ways to deal with alerts that need to be escalated and also introduce some after-hours solutions.

Our partner in crime

BVoIP, which will be hosting the webinar with us, is a Philadelphia-based company and a leading channel provider of Voice-as-a-Service solutions to MSPs, IT service providers and their downstream customers. The company specializes in integrating cloud and on-premises voice communication services with customers’ existing technologies. George Bardissi, President & CEO of BVoIP, turned to OnPage because it provides prominent alerting coupled with voice messaging.

To learn more about the webinar on January 24th at 1 PM EST, click here.


]]>
How to improve healthcare BYOD and HIPAA compliance https://www.onpage.com/improve-byod-hipaa-compliance/ Thu, 19 Jan 2017 14:37:12 +0000 https://www.onpage.com/?p=27278 Ensuring effective and secure communication in the age of healthcare BYOD Healthcare organizations are experiencing a significant rise in Bring Your Own Device (BYOD). In fact, Becker Hospital Review research says that 85% of healthcare workers bring their own devices to work. Yet along with this rise in BYOD comes an increased vulnerability to being hacked. … Continued

The post How to improve healthcare BYOD and HIPAA compliance appeared first on OnPage.

]]>

Ensuring effective and secure communication in the age of healthcare BYOD

Healthcare organizations are experiencing a significant rise in Bring Your Own Device (BYOD). In fact, Becker’s Hospital Review research says that 85% of healthcare workers bring their own devices to work. Yet along with this rise in BYOD comes an increased vulnerability to being hacked: mobile phones and tablets are the weakest link when it comes to security and are prone to attack.

Lost or stolen devices add to this vulnerability. In fact, 1.4 million Americans lost and never recovered their smartphones in 2013, and 3.1 million had their mobile devices stolen. Tens of thousands of healthcare workers lose their devices each year – causing 68% of all healthcare data breaches. As these devices often hold a mixture of personal and work-related records, the problem of stolen data is magnified.

So how can hospitals – large and small – as well as clinics ensure effective and secure communications in the age of BYOD? Read on.

Is eliminating BYOD the answer?

With the rise of smartphones and tablets in the workplace, hackers are continuing to attack enterprises through vulnerabilities in mobile devices. As I wrote in an article earlier this month in Becker’s Hospital Review, some consider this a basis for eliminating BYOD from healthcare entirely. The thinking is that if healthcare employers didn’t allow BYOD, they could better control the data security and encryption their employees use.

But eliminating BYOD is futile; the real mistake is trying to prevent further BYOD adoption. Indeed, BYOD is a cost-cutting measure embraced by many organizations. BYOD also benefits healthcare because it acknowledges that people are going to bring their own devices and use them in their work as well as their personal lives. Furthermore, healthcare providers can’t really afford to give a smartphone to everyone who would benefit from one.

The actual culprit is poor mobile device hygiene. Often the mobile devices in use lack encryption or suffer from poor password management. In addition, employees tend to leave their mobile devices in vulnerable locations such as the backseat of a car, on a desk or in a coffee shop, where the devices often become objects of theft. At that point, the issue is no longer BYOD.

Why security is failing

IT and security professionals now acknowledge that mobile devices are a widespread vector for attack: 67 percent say their organization has likely suffered a data breach through mobile. Additionally, cyber attackers are now responsible for 31.42 percent of all major HIPAA data breaches reported in 2016, a 300 percent increase over the last three years. Phishing attacks, spoofed Wi-Fi attacks and malicious applications are some of the ways in which data is compromised.

The fundamental cause is that many mobile devices lack proper hygiene, and organizations often lack institutional planning for handling lost devices. While most iPhones are encrypted, only 10% of Android phones are. Additionally, IT centers typically have neither a plan nor a method for securing their physicians’ and staffs’ mobile devices. To stop security from failing further, healthcare organizations need a method for ensuring both the security of mobile devices and of the content they contain.

Five ways to help ensure BYOD security

Hospitals can prevent significant financial loss and legal and reputational risk by ensuring that mobile communications follow HIPAA guidelines. HIPAA has many specific guidelines regarding security procedures and policies, training and behaviors. But as they relate to messaging PHI to a mobile device, HIPAA’s dictates are quite clear: hospitals need to provide reasonable protection and encryption of patient information. While encryption is not impenetrable, it provides a much higher level of data security.

Here are the other steps you want to make sure you follow to ensure HIPAA compliance:

  1. Passcodes. Implement a 4- or 6-digit passcode on all mobile devices. A lost or stolen device that is locked with a PIN or passcode is much less likely to be breached.
  2. Remote wipe. Make sure that all messages containing patient information can be wiped from the mobile device.
  3. File sharing. Make sure any files or images you share go through a private, HIPAA-compliant cloud.
  4. Encrypted messaging. Make sure all messaging to and from the device is encrypted.
  5. Data centers. Make sure your data centers’ servers are HIPAA compliant and provide end-to-end encryption.
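As an illustration, the five steps above could feed a simple device-compliance check like the following sketch. The control names are invented for this example, and a real HIPAA audit involves far more than a checklist:

```python
# Hypothetical device-policy check mirroring the five steps above
REQUIRED_CONTROLS = {
    "passcode_set",
    "remote_wipe_enabled",
    "hipaa_cloud_file_sharing",
    "encrypted_messaging",
    "hipaa_compliant_data_center",
}

def missing_controls(device: dict) -> set:
    """Return which HIPAA-relevant controls a BYOD device still lacks."""
    return {c for c in REQUIRED_CONTROLS if not device.get(c, False)}

# A BYOD phone with only some controls in place
phone = {
    "passcode_set": True,
    "remote_wipe_enabled": True,
    "encrypted_messaging": False,
}
print(missing_controls(phone))  # the gaps an IT team would need to close
```

Even a toy check like this makes gaps visible per device instead of leaving compliance to memory.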

Developing and maintaining this level of compliance is not simple. That’s why there’s OnPage. Our expertise is in ensuring secure HIPAA compliant communication for healthcare institutions and their employees. OnPage ensures messages are SSL encrypted and can only be viewed by message participants. Furthermore, OnPage content has remote wipe capabilities that meet HIPAA compliance standards.

Conclusion

Healthcare organizations can achieve secure and reliable communication. They don’t have to struggle through maintaining HIPAA compliance of their communications on their own.

Learn more about HIPAA compliant messaging so you can ensure your staff’s mobile communications are secure.


]]>