**Total Papers (n=45)**

| patient_type | number | percent |
|---|---|---|
| AC | 19976 | 24.4% |
| CDT | 9610 | 11.8% |
| ST | 52119 | 63.8% |
| total | 81705 | NA |
**Intermediate-Risk Papers (n=20)**

| patient_type | number | percent |
|---|---|---|
| AC | 8873 | 75.9% |
| CDT | 1929 | 16.5% |
| ST | 883 | 7.5% |
| total | 11685 | 14.3% (of $n_{total}$) |
**RCT Trials Only (n=17)**

| patient_type | number | percent |
|---|---|---|
| AC | 1101 | 49.8% |
| CDT | 78 | 3.5% |
| ST | 1031 | 46.7% |
| total | 2210 | 2.7% (of $n_{total}$) |
This means that the CDT patients contributed by RCTs make up only $\frac{n_{CDT}}{n_{total}} = \frac{78}{81705} \approx 0.095\%$ of the study total!
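The arithmetic is easy to verify; a minimal check in Python, using only the counts from the tables above:

```python
# Quick check of the table arithmetic; numbers are copied from the tables above.
total_all = 19976 + 9610 + 52119          # 81705 patients across all 45 papers
cdt_from_rcts = 78                         # CDT patients contributed by the 17 RCTs
rct_total = 2210                           # all patients contributed by the 17 RCTs

print(f"CDT-from-RCT share of all patients: {cdt_from_rcts / total_all:.3%}")  # ~0.095%
print(f"RCT share of all patients: {rct_total / total_all:.1%}")               # ~2.7%
```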
The paper utilized a network meta-analysis (1,2,3).
They state that “[t]he primary analysis compared CDT and systemic fibrinolysis with AC alone.”
However, they report only the CDT vs AC and ST vs AC comparisons, not results for the full three-treatment network.
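With three treatments forming a single loop (AC, CDT, ST), the two reported comparisons against AC also determine an indirect CDT-vs-ST estimate (the Bucher method). A minimal sketch with made-up log odds ratios and standard errors, not values from the paper:

```python
import numpy as np
from scipy import stats

# Hypothetical direct estimates on the log odds ratio scale (NOT from the paper).
lor_cdt_ac, se_cdt_ac = -0.40, 0.25   # CDT vs AC
lor_st_ac, se_st_ac = -0.15, 0.10     # ST vs AC

# Bucher indirect comparison of CDT vs ST through the common comparator AC.
lor_cdt_st = lor_cdt_ac - lor_st_ac
se_cdt_st = np.sqrt(se_cdt_ac**2 + se_st_ac**2)
lo, hi = lor_cdt_st - 1.96 * se_cdt_st, lor_cdt_st + 1.96 * se_cdt_st
p = 2 * stats.norm.sf(abs(lor_cdt_st / se_cdt_st))
print(f"indirect CDT vs ST: logOR = {lor_cdt_st:.2f}, 95% CI ({lo:.2f}, {hi:.2f}), p = {p:.2f}")
```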
Interestingly, they do NOT report p-values for their efficacy outcome, only 95% CIs.
Publication inconsistency for their efficacy outcome was significant ($p = 0.036$), but a loop inconsistency plot showed no inconsistency at the loop level.
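At the loop level, inconsistency is simply the gap between the direct estimate of a comparison and the indirect estimate obtained around the rest of the loop; a sketch with placeholder numbers (not from the paper):

```python
import numpy as np
from scipy import stats

# Hypothetical direct and indirect CDT-vs-ST log odds ratios; placeholders only.
lor_direct, se_direct = -0.30, 0.30
lor_indirect, se_indirect = -0.25, 0.27

# Inconsistency factor for the AC-CDT-ST loop: direct minus indirect evidence.
incons = lor_direct - lor_indirect
se_incons = np.sqrt(se_direct**2 + se_indirect**2)
p = 2 * stats.norm.sf(abs(incons / se_incons))
print(f"loop inconsistency factor = {incons:.2f}, p = {p:.2f}")  # large p -> no loop-level inconsistency
```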
Thus, they had to perform a direct meta-analysis, and for this analysis they reported p-values (?!). Why would they report p-values only for a “backup” analysis method?
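For reference, a direct (pairwise) fixed-effect meta-analysis is just an inverse-variance weighted average, and the p-value comes from the pooled z-statistic; a minimal sketch with invented per-study estimates:

```python
import numpy as np
from scipy import stats

# Hypothetical per-study log odds ratios and standard errors for one pairwise comparison.
lor = np.array([-0.50, -0.20, -0.35])
se = np.array([0.30, 0.25, 0.40])

w = 1 / se**2                             # inverse-variance weights
pooled = np.sum(w * lor) / np.sum(w)      # fixed-effect pooled log OR
pooled_se = np.sqrt(1 / np.sum(w))
z = pooled / pooled_se
p = 2 * stats.norm.sf(abs(z))             # two-sided p-value for the pooled effect
print(f"pooled logOR = {pooled:.2f} (SE {pooled_se:.2f}), p = {p:.3f}")
```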