The Fight To Criminalize Opioid Prescribing

I dug this graphic out of the graphic library on my computer; it carries a date of 2014. Last year it was claimed that there were 110,000 ODs from ALL DRUGS, of which some 70K-80K involved illegal fentanyl. That number of “dead bodies” from drug ODs is about TWICE the number of our soldiers who died during the 12-14 years that the USA was involved in the Vietnam war, and we have built monuments to honor all of those young men who lost their lives in that war. Is it just me, or does it seem strange that the number of reported drug ODs is up SIX to SEVEN times in about EIGHT YEARS, while the number of Rx opiates has been reduced by about 60%? I guess that is what everyone gets when there is no previous experience nor educational requirement to be elected to Congress.
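For anyone who wants to check the arithmetic behind those comparisons, here is a minimal sketch that uses the post's own round numbers; the ~58,000 figure for U.S. military deaths in Vietnam is the commonly cited total, and the "implied baseline" is simply what the 6x-7x claim would require, not an independently verified statistic.

```python
# Rough arithmetic behind the comparisons above, using the post's own round numbers.
vietnam_war_deaths = 58_000           # commonly cited total of U.S. military deaths in Vietnam
overdose_deaths_last_year = 110_000   # figure claimed above for ODs from ALL DRUGS

ratio = overdose_deaths_last_year / vietnam_war_deaths
print(f"OD deaths vs. Vietnam war deaths: {ratio:.1f}x")   # ~1.9x, i.e. "about TWICE"

# The post says reported ODs are up SIX to SEVEN times in about EIGHT YEARS.
# Taking that claim at face value, the implied baseline eight years ago would be:
for multiple in (6, 7):
    print(f"Implied baseline at {multiple}x: ~{overdose_deaths_last_year / multiple:,.0f} ODs/year")
```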


The Fight To Criminalize Opioid Prescribing

https://reason.com/2023/02/10/the-fight-to-criminalize-opioid-prescribing/

Over 88 percent of opioid overdose deaths now involve either heroin or fentanyl. Targeting prescriptions is not an efficient way to address mortality.

As Americans continue to die from drug overdoses at an increasing rate, policy makers are seeking interventions to reduce reckless medical practices that put patients in danger of addiction. Many of these proposed laws seek to limit the extent to which patients can access prescription opioids from their physicians. But as our new study, published this week in the Yale Law & Policy Review Inter Alia, argues, prescription opioids are not to blame for today’s crisis.

Laws that arbitrarily set duration or quantity restrictions for opioid treatment regimens are premised on the belief that increased access to prescription opioids has led to an increase in addiction and, ultimately, overdose deaths.

However, while opioid deaths continue to soar, opioid prescribing rates have decreased every year since 2012. Further, nonmedical use of prescription opioids has remained relatively stable over the past two decades. As we point out in our study, the popularized spike in reported nonmedical opioid use rates between 1998 and 2002 was most likely caused by major changes in survey methodology. And if there was a true increase in nonmedical opioid use, it would have been due to codeine, not the overprescribing of common pain relievers like OxyContin. The claim that the introduction of OxyContin in 1996 “fueled” prescription opioid use is simply not supported by the data.

Such claims about Purdue Pharma, however, were echoed during last week’s House Energy and Commerce hearing on combatting fentanyl overdoses. Rep. Tony Cárdenas (D–Calif.) prefaced his questioning with “Purdue, the Sackler family, and crime pays—still one of the richest families in the world…who was a big part of where we are today.” Rep. Mariannette Miller-Meeks (R–Iowa) shared a similar sentiment: “As we know, a lot of opioid addiction has started through post-operative care and pain management, pain relief.”

But it is illicit opioids like heroin and fentanyl, not prescription pain relievers, that are primarily to blame for today’s opioid crisis. By decreasing access to legal channels of prescribing, laws restricting the use of prescription opioids only increase the likelihood that a pain patient will need to turn to the black market to meet his demand for opioids with dangerous illicit drugs.

And yet, academics and policymakers continue to pursue reforms that would only exacerbate this issue. For example, Loyola Marymount University law professor Rebecca Delfino proposed that Congress adopt the Prescription Abuse and Prevention Act (PAPA), a law that would make it easier to prosecute physicians for violating the Controlled Substances Act (CSA). No legislation citing PAPA has yet been introduced, but policy makers have shown an appetite to criminalize doctors for prescribing opioids in the past.

Under the CSA and regulations pursuant to it, it is illegal for a physician to “knowingly or intentionally” prescribe a controlled substance unless the prescription is “for a legitimate medical purpose…in the usual course of his professional practice.” Like the CSA, PAPA offers an intent or knowledge mens rea standard, but also allows a “presumption of knowledge” to be established by a “doctor’s expert knowledge, practice experience, and specialized medical training.”

Simply put, if a patient overdoses on a medication that was prescribed by a physician, the event itself might be sufficient to convict the prescriber of second-degree murder. This is because the doctor’s expertise should have prevented the patient from being in such a vulnerable situation in the first place.

Although it shouldn’t be expected that prosecutors would go after every physician whose patient overdoses under PAPA, the fact that such an event would likely lead to an automatic conviction after a charge would worry many physicians. Moreover, PAPA could grant prosecutors the arbitrary authority to incarcerate any physician whose patient overdoses while under his care. Such potential liability would lead to further reductions in prescribing across the country, which would only increase illicit opioid deaths.

Last term, the Supreme Court clarified in Ruan v. United States that in order to be liable under the CSA, a physician must have “knowingly or intentionally acted in an unauthorized manner,” rather than merely intended to or known that he was prescribing some controlled substance. While the CSA regime is not perfect, the ruling in Ruan provided much-needed clarity to the standard of liability under the CSA and shielded physicians from facing prosecution for simply doing their job as authorized.

Criminalizing doctors is not the path to ending the opioid crisis. As much of the literature indicates, opioid prescriptions rarely lead to addiction when treating either acute or chronic pain. Given that over 88 percent of opioid overdose deaths now involve either heroin or fentanyl, targeting prescriptions is not an efficient way to address mortality. The way forward is to expand access for both addiction and pain patients, so that those who most need a safe environment in which to consume opioids can do so legally.

Brussels Terror Attack Victim Euthanized at Age 23


https://www.medscape.com/viewarticle/982984

Performing euthanasia for “mental suffering that cannot be alleviated” is still considered an extraordinary measure. Indeed, fewer than 2% of the requests for euthanasia fall within that category, and few such requests are made by young patients.

There is no doubt that people will talk about the case of Shanti De Corte not only because of the reason stated in her euthanasia request but also because someone so young was able to meet the strict conditions required for the law to be applicable. It’s something that Belgian broadcaster RTBF brought up during a recent episode of #Investigation, which reported on the aftermath of the 2016 Brussels attacks.

On May 7, surrounded by her family, Shanti was euthanized. She was 23 years old. Six years earlier, on March 22, 2016, Shanti had been at Brussels Airport when terrorists set off bombs. She was in the departures area with 90 other students from Sint-Rita Campus College, located in the northern town of Kontich. Shanti was only a few meters away from the blast. Although she was not physically injured, the Flemish teen was traumatized by the attack. This was confirmed by the school psychologist who treated the students. “There were some students who reacted worse than others to these traumatic events. And having had two discussions with Shanti, I can tell you that she was one of these students who were more sensitive to the effects. To me, it’s quite clear. Even before the attacks, she’d experienced serious psychological issues. Therefore, I referred her for psychiatric care.”

Eleven Antidepressants Daily

A few weeks after that March day, Shanti was admitted to a psychiatric hospital in Antwerp. It was a place she knew well, having been an inpatient there several times before the attacks. Shanti was treated with antidepressants. She shared her thoughts about them on numerous occasions. “I get several drugs at breakfast and up to 11 antidepressants a day. I couldn’t do without them. With all the drugs I take, I feel like a ghost who doesn’t feel anything anymore. Perhaps there were solutions other than the drugs.”

It was a brief respite. In 2020, Shanti attempted suicide. Her spirits were at their lowest. She was heavily medicated, and her medication had been increased over time. She turned down therapeutic help that was offered by a therapist who specializes in treating the victims of the Brussels attacks. The student got in touch with the Life End Information Forum, an association that supports the right to die with dignity. In April 2022, Shanti submitted a new euthanasia request, stating that she was in a medically futile condition of mental suffering. Two psychiatrists granted their approval.

A Small Proportion

Last March, the Federal Commission for the Control and Evaluation of Euthanasia reported on data from 2021. “There continues to be a very small number of euthanasia requests that cite mental and behavioral disorders (psychiatric conditions, such as personality disorders, and cognitive issues, like Alzheimer’s disease, are included in this group): 1.9% of all cases of euthanasia. Like all euthanasia files, these requests meet the legal conditions (the patient is legally competent, the request is in writing, the condition is medically futile, and the suffering — which is constant, unbearable, and cannot be alleviated — results from a serious and incurable disorder; the request is well-considered and repeated),” the report states.

This claim may be incorrect… there are 535 members between the two houses of Congress.

How can you tell if an insurance company is lying to you? They tell you they are trying to save you money.

Does this sound familiar?

It has been reported that inflation over the last TWO YEARS has increased the average family’s expenditures by over $400/week… Many small restaurants are still trying to recover from the business shutdowns during the pandemic and from increased operational costs caused by inflation. This new IRS mandate imposes a lot more administrative cost on small restaurant owners just to comply. President Biden had proposed hiring 87,000 new (ARMED) IRS agents to go after the rich and get them to pay “their fair share” – whatever that is. But now the IRS is going after the portion of our population that has the least resources to fight off an invasive IRS audit.

What other parts of our population – those with limited resources – are several parts of the FED ALPHABET SOUP OF AGENCIES taking advantage of, causing all kinds of QOL issues for the people in those populations, directly and indirectly?

Is it just me, or does it seem like all too many schoolyard bullies… really don’t grow up and mature, but get jobs somewhere in various bureaucracies where they can continue to “push people around without consequences”?

 

After pushing for tip credit elimination, Biden wants to increase taxes on tipped workers

https://minimumwage.com/2023/02/after-pushing-for-tip-credit-elimination-biden-wants-to-increase-taxes-on-tipped-workers/

In his State of the Union address this week, President Joe Biden doubled down on a mantra that no one earning less than $400,000 a year would pay new taxes. But the administration’s Internal Revenue Service quietly rolled out a new program to single out tipped restaurant workers to pay more taxes on tip earnings.

The IRS released a proposal for public comment on a new program that would “improve tip reporting compliance” to ensure more tips are reported as income, and taxed accordingly.

The program, called the Service Industry Tip Compliance Agreement (SITCA), seeks to use point-of-sale and electronic payment technology to track tip income received by service employees. Employers would be required to submit an annual report on tips received through these systems. This ultimately opens up employees’ tip income to IRS scrutiny and taxation.
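To make the mechanics concrete, here is a minimal, hypothetical sketch of the kind of point-of-sale tip aggregation the proposal describes. The record layout, employee labels, and reporting function are illustrative assumptions, not the actual SITCA data specification.

```python
from collections import defaultdict

# Hypothetical point-of-sale records: (employee, tip amount) captured per electronic payment.
# The field layout is an assumption for illustration; SITCA's real format is not spelled out here.
pos_tip_records = [
    ("server_A", 12.50),
    ("server_B", 8.00),
    ("server_A", 20.00),
]

def annual_tip_report(records):
    """Roll up electronically captured tips per employee, as an employer's annual report would."""
    totals = defaultdict(float)
    for employee, tip in records:
        totals[employee] += tip
    return dict(totals)

print(annual_tip_report(pos_tip_records))
# {'server_A': 32.5, 'server_B': 8.0} -- per-employee totals of the sort that would be
# reported annually, exposing tip income to IRS scrutiny.
```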

Lawmakers rushed to voice their opinions on the proposal this week.


While the proposed program is described as voluntary, the IRS’s official announcement says that participation would grant protection from liability for taking a tip credit and counting tips toward minimum wage requirements. Currently, the federal Fair Labor Standards Act allows employers to take a tip credit as long as employees’ regular tips bring their hourly earnings up to at least the $7.25 minimum wage.
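For readers unfamiliar with how the tip credit arithmetic works, here is a minimal sketch under the federal FLSA numbers; the $2.13 tipped cash wage is the standard federal figure, and the sample hours and tips are made up.

```python
# Federal tip credit arithmetic (illustrative; many states set higher wage floors).
FEDERAL_MINIMUM_WAGE = 7.25   # hourly floor under the FLSA
TIPPED_CASH_MINIMUM = 2.13    # minimum direct cash wage an employer may pay a tipped worker
MAX_TIP_CREDIT = FEDERAL_MINIMUM_WAGE - TIPPED_CASH_MINIMUM   # $5.12/hour

def effective_hourly_wage(hours_worked, cash_wage_per_hour, tips_received):
    """Cash wage plus tips, expressed per hour, to compare against the $7.25 floor."""
    return cash_wage_per_hour + tips_received / hours_worked

# Made-up example: 30 hours at the tipped cash minimum plus $200 in tips.
wage = effective_hourly_wage(30, TIPPED_CASH_MINIMUM, 200.00)
print(f"Effective wage: ${wage:.2f}/hr")                       # about $8.80/hr
print("Meets the $7.25 floor?", wage >= FEDERAL_MINIMUM_WAGE)  # if not, the employer must make up the gap
```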

The Biden administration has vocally opposed the tip credit system that currently allows tipped service employees, many in the restaurant industry, to earn well beyond the minimum wage. President Biden championed the Raise the Wage Act in 2021, which would have eliminated the federal tip credit; the bill failed when a bipartisan group of lawmakers voted against it because of the harm it would cause tipped restaurant employees.

Now, the administration is using its IRS resources to go after employees’ tips instead of larger, prevalent issues of tax fraud.

 

DAILY REMEDY INTERVIEWS DR. NEIL ANAND, MD, ON PHYSICIAN ARREST AND DEA-DOJ DATA ANALYTICS AS A “JUNK SCIENCE”


https://youarewithinthenorms.com/2023/02/10/daily-remedy-interviews-dr-neil-anand-md-on-physician-arrest-and-dea-doj-data-analytics-as-a-junk-science/

REPORTED BY

youarewithinthenorms.com

NORMAN J CLEMENT RPH., DDS, NORMAN L. CLEMENT PHARM-TECH, MALACHI F. MACKANDAL PHARMD, BELINDA BROWN-PARKER, IN THE SPIRIT OF JOSEPH SOLVO ESQ., INC.T. SPIRIT OF REV. IN THE SPIRIT OF WALTER R. CLEMENT BS., MS, MBA. HARVEY JENKINS MD, PH.D., C.T. VIVIAN, JELANI ZIMBABWE CLEMENT, BS., MBA., IN THE SPIRIT OF THE HON. PATRICE LUMUMBA, IN THE SPIRIT OF ERLIN CLEMENT SR., WALTER F. WRENN III., MD., JULIE KILLINGWORTH, LESLY POMPY MD., CHRISTOPHER RUSSO, MD., NANCY SEEFELDT, WILLIE GUINYARD BS., JOSEPH WEBSTER MD., MBA, BEVERLY C. PRINCE MD., FACS., NEIL ANAND, MD., RICHARD KAUL, MD., LEROY BAYLOR, JAY K. JOSHI MD., MBA, ADRIENNE EDMUNDSON, ESTER HYATT PH.D., WALTER L. SMITH BS., IN THE SPIRIT OF BRAHM FISHER ESQ., MICHELE ALEXANDER MD., CUDJOE WILDING BS, MARTIN NJOKU, BS., RPH., IN THE SPIRIT OF DEBRA LYNN SHEPHERD, BERES E. MUSCHETT, STRATEGIC ADVISORS

EXPOSING DOJ-DEA MANIPULATION DATA ANALYTIC-JUNK SCIENCE

THE TARGETING OF OUTLIERS

DEA’S STRONG-ARMING OF THE MEDICAL PROFESSION 

The canary in the coal mine is a metaphor alluding to the initial sign of an impending calamity. It is an apt metaphor for what just transpired. Recently, the Drug Enforcement Administration (DEA) issued a public safety alert informing the public about a “sharp increase in fake prescription pills containing Fentanyl and Meth[amphetamines].”

The alert was the canary in the coal mine, but the impending calamity is not what we are led to believe. We have known of a rise in counterfeit prescription pills for many years. It was a rise induced directly by DEA policies inflicted upon patients.

“THE DEA ORCHESTRATED A CULTURE OF FEAR THAT CAN BE DESCRIBED AS MEDICAL MCCARTHYISM, IN WHICH THE FEAR OF PROSECUTION DEFINED THE QUALITY OF CARE.”

The impending calamity is not the risks counterfeit pills pose to the public but the legal liability the DEA faces for its failed policies.

For years federal agencies and physician advocacy groups remained in lockstep on the causes of the opioid epidemic. We heard it all: lax prescription guidelines, overprescribing by willing physicians, and the addictive nature of opioids.

A CULTURE OF FEAR AND INTIMIDATION

Unfortunately, these purported causes proved erroneous at best or deliberately misrepresented at worst. And implementation of policies based on these causes led to a rise in suicide and clinical stigmatization among chronic pain patients and patients with substance use dependencies.

FOR NOW, YOU ARE WITHIN THE NORMS

YOUAREWITHINTHENORMS.COM (WYNTON MARSALIS, CONCERTO FOR TRUMPET AND 2 OBOES, 1984)


Are AI Chatbots in Healthcare Ethical?


https://www.medpagetoday.com/opinion/second-opinions/102987

Their use must require informed consent and independent review

Within a week of its Nov. 30, 2022 release by OpenAI, ChatGPT was the most widely used and influential artificial intelligence (AI) chatbot in history with over a million registered users. Like other chatbots built on large language models, ChatGPT is capable of accepting natural language text inputs and producing novel text responses based on probabilistic analyses of enormous bodies or corpora of pre-existing text. ChatGPT has been praised for producing particularly articulate and detailed text in many domains and formats, including not only casual conversation, but also expository essays, fiction, song, poetry, and computer programming languages. ChatGPT has displayed enough domain knowledge to narrowly miss passing a certifying exam for accountants, to earn C+ grades on law school exams and B- grades on business school exams, and to pass parts of the U.S. Medical Licensing Exams. It has been listed as a co-author on at least four scientific publications.

At the same time, like other large language model chatbots, ChatGPT regularly makes misleading or flagrantly false statements with great confidence (sometimes referred to as “AI hallucinations”). Despite significant improvements over earlier models, it has at times shown evidence of algorithmic racial, gender, and religious bias. Additionally, data entered into ChatGPT is explicitly stored by OpenAI and used in training, threatening user privacy. In my experience, I’ve asked ChatGPT to evaluate hypothetical clinical cases and found that it can generate reasonable but inexpert differential diagnoses, diagnostic workups, and treatment plans. Its responses are comparable to those of a well-read and overly confident medical student with poor recognition of important clinical details.
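For readers curious what “asking ChatGPT to evaluate a hypothetical clinical case” looks like in code rather than in the chat window, here is a minimal sketch. It assumes the pre-1.0 openai Python package and its ChatCompletion endpoint as they existed around this time; the model name, placeholder API key, and sample vignette are illustrative assumptions, and the library’s interface has since changed across versions.

```python
# Minimal sketch: sending a hypothetical clinical vignette to a large language model.
# Assumes the pre-1.0 `openai` package; later library versions use a different client interface.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

case = (
    "A 58-year-old with type 2 diabetes has three days of fever, productive cough, "
    "and an oxygen saturation of 91% on room air. Give a brief differential diagnosis "
    "and an initial workup."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",                         # assumed model name
    messages=[{"role": "user", "content": case}],
    temperature=0.2,                               # lower temperature for more conservative output
)

print(response["choices"][0]["message"]["content"])
# As noted above, the output reads like a well-read but overconfident medical student:
# plausible differentials and workups, but nothing that should be treated as expert advice.
```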

This suddenly widespread use of large language model chatbots has brought new urgency to questions of artificial intelligence ethics in education, law, cybersecurity, journalism, politics — and, of course, healthcare.

As a case study on ethics, let’s examine the results of a pilot program from the free peer-to-peer therapy platform Koko. The program used the same GPT-3 large language model that powers ChatGPT to generate therapeutic comments for users experiencing psychological distress. Users on the platform who wished to send supportive comments to other users had the option of sending AI-generated comments rather than formulating their own messages. Koko’s co-founder Rob Morris reported: “Messages composed by AI (and supervised by humans) were rated significantly higher than those written by humans on their own,” and “Response times went down 50%, to well under a minute.” However, the experiment was quickly discontinued because “once people learned the messages were co-created by a machine, it didn’t work.” Koko has made ambiguous and conflicting statements about whether users understood that they were receiving AI-generated therapeutic messages but has consistently reported that there was no formal informed consent process or review by an independent institutional review board.

ChatGPT and Koko’s therapeutic messages raise an urgent question for clinicians and clinical researchers: Can large language models be used in standard medical care or should they be restricted to clinical research settings?

In terms of the benefits, ChatGPT and its large language model cousins might offer guidance to clinicians and even participate directly in some forms of healthcare screening and psychotherapeutic treatment, potentially increasing access to specialist expertise, reducing error rates, lowering costs, and improving outcomes for patients. On the other hand, they entail currently unknown and potentially large risks of false information and algorithmic bias. Depending on their configuration, they can also be enormously invasive to their users’ privacy. These risks may be especially harmful to vulnerable individuals with medical or psychiatric illness.

As researchers and clinicians begin to explore the potential use of large language model artificial intelligence in healthcare, applying principles of clinical research will be key. As most readers will know, clinical research is work with human participants that is intended primarily to develop generalizable knowledge about health, disease, or its treatment. Determining whether and how artificial intelligence chatbots can safely and effectively participate in clinical care would prima facie appear to fit perfectly within this category of clinical research. Unlike standard medical care, clinical research can involve deviations from the standard of care and additional risks to participants that are not necessary for their treatment but are vital for generating new generalizable knowledge about their illness or treatments. Because of this flexibility, clinical research is subject to additional ethical (and — for federally funded research — legal) requirements that do not apply to standard medical care but are necessary to protect research participants from exploitation. In addition to informed consent, clinical research is subject to independent review by knowledgeable individuals not affiliated with the research effort — usually an institutional review board. Both clinical researchers and independent reviewers are responsible for ensuring the proposed research has a favorable risk-benefit ratio, with potential benefits for society and participants that outweigh the risks to participants, and minimization of risks to participants wherever possible. These informed consent and independent review processes — while imperfect — are enormously important to protect the safety of vulnerable patient populations.

There is another newer and evolving category of clinical work known as quality improvement or quality assurance, which uses data-driven methods to improve healthcare delivery. Some tests of artificial intelligence chatbots in clinical care might be considered quality improvement. Should these projects be subjected to informed consent and independent review? The NIH lays out a number of criteria for determining whether such efforts should be subjected to the added protections of clinical research. Among these, two key questions are whether techniques deviate from standard practice, and whether the test increases the risk to participants. For now, it is clear that use of large language model chatbots both deviates from standard practice and introduces novel, uncertain risks to participants. It is possible that in the near future, as AI hallucinations and algorithmic bias are reduced and as AI chatbots are more widely adopted, their use may no longer require the protections of clinical research. At present, informed consent and institutional review remain critical to the safe and ethical use of large language model chatbots in clinical practice.

30 companies you don’t want to work for


Is this the corporate entity – and its unhappy employees – that you are entrusting your healthcare to?

Can an artificial intelligence system emulate common sense?

I asked this question of the AI BOT at https://chat.openai.com. I just saw that this bot was able to pass a medical licensing board/test. There was also a piece on TV this past week about a pilot program in a hospital where cameras monitored pts in a particular hospital unit and fed into an AI program, to see if AI could detect changes in a pt’s health status based on body language and/or movement.

Back in 1965, Gordon Moore – co-founder of Intel – declared that “computer power” would DOUBLE EVERY TWO YEARS while its cost would be CUT IN HALF over the same period… that became known as “Moore’s Law.” I bought my first TI calculator in the mid-’70s and my first personal computer in the late ’70s. With the book that came with the computer, I taught myself how to write software and actually ended up selling some ultra-specialty software packages. I have had a front-row seat to watch Moore’s Law evolve.
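To put that doubling claim into numbers, here is a minimal sketch of the compounding arithmetic; the 1965 start year and two-year doubling period come from the paragraph above, and the baseline of 1 unit of “computer power” is an arbitrary illustrative assumption.

```python
# Compounding arithmetic behind "doubles every two years" (illustrative only).
START_YEAR = 1965
DOUBLING_PERIOD_YEARS = 2

def moores_law_factor(year, baseline=1.0):
    """How many times the 1965 baseline 'computer power' would have grown by a given year."""
    doublings = (year - START_YEAR) / DOUBLING_PERIOD_YEARS
    return baseline * 2 ** doublings

for year in (1975, 1995, 2023):
    print(f"{year}: ~{moores_law_factor(year):,.0f}x the 1965 baseline")
# 1975: ~32x    1995: ~32,768x    2023: ~536,870,912x
```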

We are witnessing the next quantum leap forward in how computers will impact our lives, and Moore’s Law may now be as antiquated as the computers that were around when Gordon Moore established it. Artificial Intelligence (AI) is here and now. Below is the answer I got from ChatGPT when I asked the following question. This BOT has only been around a few months, and it has already passed a Medical Licensing test/board…

How long before AI is involved with Electronic Medical Records, helping – or dictating to – practitioners as to what is appropriate therapy and/or overseeing the insurance/PBM prior approval process? I asked this BOT this question because people dealing with subjective diseases (pain, anxiety, depression, ADD/ADHD, mental health) have few or no lab tests to confirm that the pt actually has the medical issue and/or the impact the disease has on the pt’s QOL. This is where a practitioner’s common sense and clinical experience come into play in developing a treatment plan for a pt.

To be quite honest, artificial intelligence involved in medical care… scares the CRAP OUT OF ME! I got a free Alexa a few years back, and I have never plugged her in nor connected her to my home internet/router; I have never had much trust in her either.

Can an artificial intelligence system emulate common sense?

Currently, artificial intelligence systems do not possess common sense in the same way that humans do. Common sense refers to a broad range of implicit knowledge and understanding that people have about the world and human behavior. It enables humans to make inferences and judgments in situations that they have not encountered before.

While AI systems have made significant progress in specific domains such as image recognition, language processing, and game playing, they do not yet have the ability to understand and reason about the world in the same way that humans do. To develop AI systems with common sense, researchers would need to find ways to explicitly represent and incorporate this type of knowledge and reasoning into AI algorithms.