
powering

Swedish translation: stöd


GLOSSARY ENTRY (DERIVED FROM QUESTION BELOW)
English term or phrase:powering
Swedish translation:stöd
Entered by: Lisa Lloyd

14:31 Nov 10, 2003
English to Swedish translations [PRO]
Medical / statistics
English term or phrase: powering
This information was used as an assumption in
developing the protocol and statistical powering of the study. The results also provide important information
about the efficacy of cationic peptides in killing bacteria.

What is "powering"?

Cheers, Lisa :o)
Lisa Lloyd
Local time: 18:07
uppbackning, stöd
Explanation:
is what they must mean, I think - that is, producing statistics from the study's pooled data that make it appear more impressive and "meaty". There are only 8 Google hits for the English expression "statistical powering", and one of them appears to be the very text you are working on.

--------------------------------------------------
Note added at 2003-11-10 14:48:43 (GMT)
--------------------------------------------------

...for that reason, I believe "powering" is not a term that belongs to the world of statistics, but simply a way of saying "add weight" to the study.

--------------------------------------------------
Note added at 2003-11-10 15:15:01 (GMT)
--------------------------------------------------

This link requires registration, so I am pasting in the content even though it is long:

Powering the Study

To power a study, a series of assumptions based on prior experience with the treatments being compared are used to make estimates of the likely effects that can be expected and then plan for a sufficient number of participants (the sample size) to minimize the false-positives (alpha, or type 1, errors) and the false-negatives (beta, or type 2, errors). In HIV treatment trials, minimizing both of these types of error is critical because given the large number of possible combinations, it is likely that most comparative trials will be performed only on a single occasion; thus, it behooves the investigator to get it right the first time.

In a superiority trial, significance testing focuses on asserting the null hypothesis, the hypothesis that there is no true difference among the compared groups. Similarly, a noninferiority design is one in which the assertion is that a trial is unable to detect a superior or equivalent result. In these circumstances, if no difference is found among the study groups, it may be because there is no true difference or it may be because the study was not sufficiently large and by chance the outcome indicated similarity. (Noninferiority is not the same as equivalence; lack of a difference is not the same as evidence of no difference.)

To decide whether there is no difference or whether the similarity was by chance, the alpha level is established, representing the maximum probability of making a false-positive error that is acceptable. In general, the alpha level is set at P = .05, so there is no more than a 1 in 20 probability that the outcome has occurred by chance. However, when interim analyses have been performed (as is all too common in HIV trials, where analyses often seem to be performed for the purposes of submitting conference abstracts rather than for valid statistical or safety reasons) or when multiple comparisons are made in data (as was described in AIDS Clinical Trials Group 384[4] and 2NN[5]), a higher (more rigorous) alpha level should be sought. This adjustment for multiple comparisons, known as the Bonferroni correction, aims to limit the possibility that if 20 statistical comparisons are made with an error probability of 1 in 20, then by chance 1 will be an erroneous false-positive result.

When mean differences are compared, such as with a t test, and a P value of less than .05 is observed, there is little interest in the false-negative, or beta, level. However, if the sample size is too small and a nonsignificant alpha level is obtained, it may have been caused by a false-negative result. By convention, when designing trials, the beta level, the acceptable level for getting a false-negative result, is set at 20% -- that is, the study will have a 20% chance of missing a true-positive finding. The smaller the beta level the investigator is willing to accept, the larger the sample size needed for the study to be adequately powered.

http://www.medscape.com/viewarticle/451677_3
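
The sample-size arithmetic the article describes can be sketched numerically. The Python snippet below is an illustration only (it is not part of the quoted article or the asker's text): it uses the standard normal-approximation formula for comparing two proportions, with hypothetical response rates of 70% vs 85%, the conventional alpha = .05 and beta = .20 mentioned in the article, and a Bonferroni-corrected alpha for 20 planned comparisons:

```python
from math import sqrt, ceil
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, beta=0.20):
    """Per-group sample size for a two-sided test comparing two
    proportions (normal approximation)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # critical value tied to the false-positive rate
    z_beta = z(1 - beta)        # critical value tied to the false-negative rate
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Hypothetical response rates of 70% vs 85%, with the article's
# conventional alpha = .05 and beta = .20 (i.e. 80% power):
n = sample_size_two_proportions(0.70, 0.85)

# Demanding a smaller beta (more power) requires more participants:
n_more_power = sample_size_two_proportions(0.70, 0.85, beta=0.10)

# Bonferroni correction for 20 planned comparisons:
alpha_corrected = 0.05 / 20  # each comparison is tested at .0025
```

Shrinking either alpha or beta pushes the required sample size up, which is exactly the trade-off the article describes when it says a smaller acceptable beta level means a larger sample size.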
Selected response from:

EKM
Sweden
Local time: 19:07
Grading comment
"statistiskt stöd" (statistical support) will probably work well - thanks, Mårten!
4 KudoZ points were awarded for this answer



Summary of answers provided
3 +1  uppbackning, stöd  (EKM)
2     underbyggnad  (Reino Havbrandt)



Answers


14 mins   confidence: 3/5   peer agreement (net): +1
uppbackning, stöd


(Explanation and quoted Medscape article as in the selected response above.)

EKM
Sweden
Local time: 19:07
Native speaker of: Swedish
PRO pts in pair: 1934

Peer comments on this answer (and responses from the answerer)
agree  Gorel Bylund
16 mins
  -> Tack så mycket! :-)

4 hrs   confidence: 2/5
underbyggnad


Explanation:
statistisk underbyggnad


    Reference: http://www.health.fi/tapaturmapaiva/svenska/material99.html
Reino Havbrandt
Sweden
Local time: 19:07
Native speaker of: Swedish, Finnish
PRO pts in pair: 5026






KudoZ™ translation help
The KudoZ network provides a framework for translators and others to assist each other with translations or explanations of terms and short phrases.


