Curvilinear relationship between variables



A curvilinear relationship is a type of relationship between two variables in which the pattern of correspondence or association between them changes across their range, so that it is better described by a curve than by a straight line. One theory, for example, suggests that the relationship between religiosity and death anxiety is curvilinear: both high and low scorers on religiousness report different levels of death anxiety than those who score in the middle. The material below takes up two related questions: how do you model interactions of continuous variables with regression, and what is the difference between a moderator and a mediator?

We model interaction terms by computing a product vector (that is, we multiply the two IVs together to get a third variable) and then including this variable along with the other two in the regression equation. In a graph of the hypothesized response surface, note how the regression line of Y on X2 becomes steeper as we move up values of X1. Note also the curved contour lines on the floor of the figure: they mean that the regression surface is curved, and we can clearly see how the slopes become steeper as we move up values of both X variables.

When we model an interaction of two or more continuous IVs with regression, the test we conduct is essentially for this shape. There are many other shapes that we might think of as representing the idea of interaction (one variable influences the importance of the other), but these other shapes are not tested by the product term in regression. Things are different for categorical variables and product terms; there we can support many different shapes.
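As a hedged illustration of the product-term approach described above, here is a minimal R sketch; the data are simulated and the variable names (x1, x2, y) are illustrative, not taken from any of the studies discussed.

    # Hypothetical data: two continuous IVs and an outcome (simulated for illustration)
    set.seed(1)
    dat <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
    dat$y <- 0.4 * dat$x1 + 0.3 * dat$x2 + 0.5 * dat$x1 * dat$x2 + rnorm(200)

    # Compute the product vector and include it along with the two IVs ...
    dat$x1x2 <- dat$x1 * dat$x2
    fit_product <- lm(y ~ x1 + x2 + x1x2, data = dat)

    # ... which is what R's interaction shorthand does as well
    fit_shorthand <- lm(y ~ x1 * x2, data = dat)
    summary(fit_product)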

Pedhazur's Views of the Interaction

In Pedhazur's view, it only makes sense to speak of interactions when (1) the IVs are orthogonal, and (2) the IVs are manipulated, so that one cannot influence the other. In other words, Pedhazur only wants to talk about interactions in the context of highly controlled research, essentially when data are collected in an ANOVA design. He acknowledges that we can have interactions in nonexperimental research, but he wants to call them something else, like multiplicative effects.

Nobody else seems to take this view. The effect is modeled identically both mathematically and statistically in experimental and nonexperimental research. True, they often mean something different, but that is true of experimental and nonexperimental designs generally.

If we follow his reasoning for independent variables that do not interact, we might as well adopt the term 'main effect' for experimental designs and 'additive effect' for nonexperimental designs. I don't understand his point about not having interactions when the IVs are correlated. Clearly we lose power to detect interactions when the IVs are correlated, but in my view, if we find them, they are interpreted just the same as when the IVs are orthogonal.

But I may have missed something important here.

Conducting Significance Tests for Interactions

The product term is created by multiplying together the two vectors that contain the two IVs. The product term tends to be highly correlated with the original IVs. Most people therefore recommend that we subtract the mean of each IV from that IV before we form the cross-product.

This will reduce the size of the correlation between each IV and the cross-product term, but leave the test for the increase in R-square intact.

It will, however, affect the b weights. When you find a significant interaction, you must include the original variables and the interaction term as a block, regardless of whether some of the IV terms are nonsignificant (unless all three are uncorrelated, an unlikely event).
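A minimal R sketch of this centering-and-block approach, assuming simulated data with illustrative names; the hierarchical test described in the steps below is carried out with anova():

    # Hypothetical data (simulated): mean-center the IVs, form the cross-product,
    # then test the increase in R-square with a hierarchical (block) comparison
    set.seed(1)
    dat <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
    dat$y <- 0.4 * dat$x1 + 0.3 * dat$x2 + 0.5 * dat$x1 * dat$x2 + rnorm(200)

    dat$x1_c   <- dat$x1 - mean(dat$x1)
    dat$x2_c   <- dat$x2 - mean(dat$x2)
    dat$prod_c <- dat$x1_c * dat$x2_c

    step_main <- lm(y ~ x1_c + x2_c, data = dat)            # IVs only
    step_full <- lm(y ~ x1_c + x2_c + prod_c, data = dat)   # IVs plus product term

    anova(step_main, step_full)   # F test for the change in R-square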

To conduct the test: (1) form the product term; (2) regress Y onto X1 and X2; (3) regress Y onto X1, X2, and the product term. Test whether the difference in R-square between steps 2 and 3 is significant. Alternatively, skip step 2 and check whether the b weight for the product term is significant in step 3, that is, in a simultaneous regression with Type III sums of squares.

In summary, we suggest that the relationship between work pressure and state CSE is inverted U-shaped; it peaks at moderate levels and declines at low and high levels of work pressure.

Work pressure has an inverted U-shaped within-person relationship with state CSE.

To the best of our knowledge, there is only one within-person study on the positive relationship between CSE and task performance (Debusscher et al.). An important reason for the positive relationship between CSE and task performance is that individuals who are high on CSE are better at setting goals and working toward them, and as a result are more motivated to perform their jobs.

Indeed, both in a lab experiment and a field study, Erez and Judge demonstrated that CSE related to task motivation, persistence, goal setting, goal commitment, activity level, and task performance. Building on these findings, we hypothesize that day-to-day variation in state CSE relates positively to day-to-day variation in task performance, which, when combined with the foregoing hypotheses, implies that state CSE is expected to mediate the curvilinear within-person relationship between work pressure and task performance.

State CSE mediates the inverted U-shaped within-person relationship between work pressure and task performance. This expectation follows from the conceptualization of traits as individual differences in the sensitivity to situational provocation.


Building on the idea of traits as situational sensitivities, we argue that trait CSE relates to contingent units of CSE. That is, for a person high in trait CSE, we expect the level of state CSE to be less contingent upon the level of work pressure, because such a person is less susceptible to it. This reasoning is in line with the finding that people high in trait neuroticism react more strongly to negative environmental features than people low in neuroticism, even when confronted with relatively small problems (Suls and Martin; Debusscher et al.).


In the same vein, Bolger and Schilling demonstrated that people high in trait neuroticism show increased reactivity to stressful situations. Finally, for self-esteem, it has been shown that people high in trait self-esteem are protected from the effects of external factors (Mossholder et al.). As emotional stability (the counterpart of neuroticism), high self-esteem, and high self-efficacy are indicators of high CSE, these findings suggest that people high in trait CSE might be less susceptible to variation in work pressure than people low in trait CSE.

Materials and Methods

Participants

Fifty-five employees (33 women) from different Belgian companies participated in the study. On average, respondents were ... years old. Fifteen participants had a secondary school degree, 12 completed higher professional education, and 28 completed higher academic education. In terms of job content, 16 worked in logistics and distribution, 13 in governmental and non-profit organizations, 6 in health care, 6 in telecom, 4 in the financial sector, 1 in chemistry and pharmacy, 3 in human resources, 2 in communication, and 4 in other jobs.

Ten participants worked part-time (seven worked 4 days a week, one worked 3 days, and two worked 2 days). We recruited participants in several ways: we posted a call on the intranet of the Flemish education networks and in the alumni newsletter of the Vrije Universiteit Brussel, and we emailed personal contacts. In these calls, we explained the goal of the study and stressed that the anonymity of records would be ensured.

We contacted again only those people who indicated, via email or orally, that they were willing to participate in the study. Participants were enrolled in a 10-day daily diary study in which trait CSE was measured at baseline, while work pressure, state CSE, and task performance were assessed daily. For the daily diary part, participants received an email each working day with a link to a survey in which they reported their level of work pressure, state CSE, and task performance, and they did so for 10 consecutive working days.

At the beginning of each survey, we again stressed that the data would be made anonymous. Moreover, participants could stop participating in the study whenever they wanted. All scales, as well as the items within each scale, were randomized.


To allow for a momentary or state measure of CSE, we slightly adapted the items.

Work Pressure

Work pressure was measured using the three-item scale of Bakker et al. Similar to the state CSE scale, we slightly adapted it to allow for daily ratings of work pressure.

Task Performance

Task performance was measured using the seven-item task performance subscale of Williams and Anderson. Similar to the state CSE scale, we slightly adapted it to allow for momentary self-ratings of performance.


Analyses

Because of the complexity of the mediation model, we first tested all hypothesized relationships separately using two-level regression analyses with the lme4 package in R (Bates). All level-1 predictors (i.e., the daily measures) were person-mean centered. This procedure ensures that the level-1 predictors contain within-person variability only, which is necessary because the hypotheses regarding the relationships between work pressure, state CSE, and task performance pertain to the within-person level.

To test whether the effect of the level-1 predictors was consistent across individuals, we tested whether a model with a random slope on the between-person level fitted our data significantly better than a model without random slopes. Next, the hypotheses were tested simultaneously using Bayesian two-level path modeling in Mplus version 7.
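As a hedged sketch of the lme4 part of such an analysis (person-mean centering plus the random-slope comparison), using simulated diary data and illustrative object names rather than the authors' code:

    library(lme4)

    # Simulated daily-diary data: 55 persons x 10 days (illustrative only)
    set.seed(1)
    dd <- expand.grid(id = 1:55, day = 1:10)
    dd$work_pressure <- rnorm(nrow(dd), mean = 3, sd = 1)
    dd$performance   <- 4 - 0.3 * (dd$work_pressure - 3)^2 + rnorm(nrow(dd), sd = 0.5)

    # Person-mean centering, so level-1 predictors carry within-person variability only
    dd$wp_c <- with(dd, ave(work_pressure, id, FUN = function(x) x - mean(x)))

    # Random-intercept model vs. a model that adds a random slope for work pressure
    m_fixed  <- lmer(performance ~ wp_c + (1 | id), data = dd, REML = FALSE)
    m_random <- lmer(performance ~ wp_c + (1 + wp_c | id), data = dd, REML = FALSE)

    # Likelihood-ratio test: does the random slope improve model fit?
    anova(m_fixed, m_random)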

curvilinear relationship between variables

Moreover, this approach allows testing complicated models. An important difference between Bayesian and the more traditional (frequentist) approach is that Bayesian analysis does not yield p-values and confidence intervals. Instead, for each parameter in the model, Bayesian analysis yields a posterior distribution, which shows the probability distribution of the parameter given the data (Kruschke et al.). Based on these posterior distributions, credibility intervals can be constructed.

These credibility intervals include a predefined percentage of the posterior distribution (e.g., 95%). For our Bayesian analysis, we draw on these credibility intervals to help decide which parameter values should be deemed credible (Kruschke et al.).
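As a minimal illustration of the idea, not of the Mplus analysis itself, with simulated posterior draws:

    # Simulated posterior draws for one parameter (e.g., from an MCMC sampler)
    set.seed(1)
    posterior_draws <- rnorm(4000, mean = 0.30, sd = 0.08)

    # A 95% equal-tailed credibility interval: the central 95% of the posterior
    ci_95 <- quantile(posterior_draws, probs = c(0.025, 0.975))
    ci_95

    # A parameter value of 0 is deemed not credible if it falls outside this interval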

For each level-1 variable, the intra-class correlations (ICCs) show the proportion of variation due to between- versus within-person differences. Overall, the ICCs show that a substantial part of the variability in work pressure, state CSE, and task performance is due to within-person differences.
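A hedged sketch of how such ICCs are commonly obtained from an intercept-only multilevel model (simulated data; not the authors' code):

    library(lme4)

    # Intercept-only (null) model for one daily variable, e.g. work pressure
    set.seed(2)
    dd <- expand.grid(id = 1:55, day = 1:10)
    dd$work_pressure <- rnorm(55)[dd$id] + rnorm(nrow(dd))  # between + within variance

    m0 <- lmer(work_pressure ~ 1 + (1 | id), data = dd)

    # ICC = between-person variance / (between-person + within-person variance)
    vc  <- as.data.frame(VarCorr(m0))
    icc <- vc$vcov[vc$grp == "id"] / sum(vc$vcov)
    icc   # proportion of variance due to between-person differences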

Table: Means, standard deviations, intra-class correlations, and correlations for all study variables.

Next, we tested the hypothesized relationships by means of a series of two-level regression analyses. First, we tested whether within-person fluctuations in work pressure relate in an inverted U-shaped way to within-person fluctuations in task performance.

To do so, we predicted momentary task performance from work pressure and work pressure squared (work pressure was person-centered before computing the squared effect). Moreover, we tested whether these relationships varied across individuals. Next, we tested whether there is an inverted U-shaped within-person relationship between work pressure and state CSE.
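A minimal sketch of such a quadratic (inverted U-shaped) within-person model, again with simulated data and illustrative names:

    library(lme4)

    set.seed(3)
    dd <- expand.grid(id = 1:55, day = 1:10)
    dd$work_pressure <- rnorm(nrow(dd), mean = 3, sd = 1)
    dd$state_cse     <- 4 - 0.4 * (dd$work_pressure - 3)^2 + rnorm(nrow(dd), sd = 0.5)

    # Person-center work pressure, then square the centered score
    dd$wp_c  <- with(dd, ave(work_pressure, id, FUN = function(x) x - mean(x)))
    dd$wp_c2 <- dd$wp_c^2

    # An inverted U shows up as a negative coefficient on the squared term
    m_quad <- lmer(state_cse ~ wp_c + wp_c2 + (1 | id), data = dd)
    summary(m_quad)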

This was done by adding the main effect of trait CSE as well as the interaction between trait CSE and the linear component of work pressure to the previous model. A graphical representation of this moderation effect is shown in Figure 2, which shows that the level of state CSE of people high on trait CSE is less affected by the level of work pressure these people experience.
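A hedged sketch of adding that cross-level moderation (trait CSE and its interaction with the linear, person-centered work pressure term), with simulated data:

    library(lme4)

    set.seed(4)
    dd <- expand.grid(id = 1:55, day = 1:10)
    trait <- rnorm(55)                     # person-level trait CSE (illustrative)
    dd$trait_cse     <- trait[dd$id]
    dd$work_pressure <- rnorm(nrow(dd), mean = 3, sd = 1)
    dd$state_cse     <- 4 - 0.4 * (dd$work_pressure - 3)^2 +
                        0.3 * dd$trait_cse + rnorm(nrow(dd), sd = 0.5)

    dd$wp_c  <- with(dd, ave(work_pressure, id, FUN = function(x) x - mean(x)))
    dd$wp_c2 <- dd$wp_c^2

    # Cross-level interaction: trait CSE moderates the linear within-person slope
    m_mod <- lmer(state_cse ~ wp_c * trait_cse + wp_c2 + (1 | id), data = dd)
    summary(m_mod)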

Finally, we tested a model in which momentary task performance was predicted by state CSE, work pressure, and work pressure squared.

Figure: X-ray of a tortoise, showing eggs.
Figure: Graph of clutch size (number of eggs) vs. carapace length.

Graphing the results

As shown above, you graph a curvilinear regression the same way you would a linear regression: a scattergraph with the independent variable on the X axis and the dependent variable on the Y axis.

In general, you shouldn't show the regression line for values outside the range of observed X values, as extrapolation with polynomial regression is even more likely than with linear regression to yield ridiculous results. For example, extrapolating the quadratic equation relating tortoise carapace length and number of eggs predicts that tortoises with carapace lengths shorter or longer than those in the observed range would have negative numbers of eggs.
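A hedged R sketch of fitting a quadratic regression and drawing the curve only over the observed range; the data are simulated stand-ins, not the Ashton et al. tortoise measurements:

    # Simulated stand-in data: carapace length (mm) and clutch size
    set.seed(5)
    length_mm <- runif(30, 280, 340)
    clutch    <- -0.002 * (length_mm - 310)^2 + 8 + rnorm(30)

    # Quadratic (second-order polynomial) regression
    fit2 <- lm(clutch ~ length_mm + I(length_mm^2))

    # Scatterplot with the fitted curve drawn only over the observed X range
    plot(length_mm, clutch, xlab = "Carapace length (mm)", ylab = "Clutch size")
    xs <- seq(min(length_mm), max(length_mm), length.out = 200)
    lines(xs, predict(fit2, newdata = data.frame(length_mm = xs)))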


Similar tests

Before performing a curvilinear regression, you should try different transformations when faced with an obviously curved relationship between an X and a Y variable. A linear equation relating transformed variables is simpler and more elegant than a curvilinear equation relating untransformed variables. You should also remind yourself of your reason for doing a regression.
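A minimal sketch of trying a transformation first, assuming simulated data with a curved relationship:

    # Hedged sketch: try a transformation before resorting to polynomial terms
    set.seed(6)
    x <- runif(50, 1, 100)
    y <- 3 * x^0.5 * exp(rnorm(50, sd = 0.1))   # curved, multiplicative relationship

    fit_linear <- lm(y ~ x)            # leaves a curved pattern in the residuals
    fit_logged <- lm(log(y) ~ log(x))  # straight-line fit on the transformed scales

    plot(log(x), log(y))
    abline(fit_logged)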

If your purpose is prediction of unknown values of Y corresponding to known values of X, then you need an equation that fits the data points well, and a polynomial regression may be appropriate if transformations do not work. However, if your purpose is testing the null hypothesis that there is no relationship between X and Y, and a linear regression gives a significant result, you may want to stick with the linear regression even if curvilinear gives a significantly better fit. Using a less-familiar technique that yields a more-complicated equation may cause your readers to be a bit suspicious of your results; they may feel you went fishing around for a statistical test that supported your hypothesis, especially if there's no obvious biological reason for an equation with terms containing exponents.

Spearman rank correlation is a nonparametric test of the association between two variables. It will work well if there is a steady increase or decrease in Y as X increases, but not if Y goes up and then goes down.

Polynomial regression is a form of multiple regression. In multiple regression, there is one dependent (Y) variable and multiple independent (X) variables: X1, X2, X3, and so on. In polynomial regression, the independent "variables" are just X, X², X³, etc.
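A brief R illustration of that equivalence, with simulated data; either formulation gives the same fitted model:

    set.seed(7)
    x <- runif(40, 0, 10)
    y <- 2 + 1.5 * x - 0.12 * x^2 + rnorm(40)

    # Polynomial regression written as an ordinary multiple regression
    # on the constructed predictors x, x^2, x^3
    fit_poly <- lm(y ~ x + I(x^2) + I(x^3))

    # Same fit using R's polynomial helper with raw (untransformed) powers
    fit_raw  <- lm(y ~ poly(x, 3, raw = TRUE))

    all.equal(fitted(fit_poly), fitted(fit_raw))  # TRUE: identical fitted values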

How to do the test

Spreadsheet

I have prepared a spreadsheet that will help you perform a polynomial regression. It tests equations up to quartic, and it will handle a limited number of observations.

Web pages

There is a very powerful web page that will fit just about any equation you can think of (not just polynomial) to your data.

R

Salvatore Mangiafico's R Companion has sample R programs for polynomial regression and for other forms of regression that I don't discuss here (B-spline regression and other forms of nonlinear regression).

SAS

To do polynomial regression in SAS, you create a data set containing the square of the independent variable, the cube, etc.


It's possible to do this as a single multiple regression, but I think it's less confusing to use multiple model statements, adding one term to each model. There doesn't seem to be an easy way to test the significance of the increase in R² in SAS, so you'll have to do that by hand.
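A hedged sketch of that by-hand calculation in R, implementing the F test for the increase in R² described just below; the R² values and sample size in the example call are placeholders, not the tortoise results:

    # F test for the increase in R-squared from the i-th to the j-th order polynomial
    delta_r2_F <- function(r2_i, r2_j, i, j, n) {
      df_num <- j - i            # number of extra polynomial terms added
      df_den <- n - j - 1        # residual d.f. for the higher-order model
      F_stat <- ((r2_j - r2_i) / df_num) / ((1 - r2_j) / df_den)
      p_val  <- pf(F_stat, df_num, df_den, lower.tail = FALSE)
      c(F = F_stat, df1 = df_num, df2 = df_den, p = p_val)
    }

    # Example with placeholder values: linear (i = 1) vs. quadratic (j = 2), n = 18
    delta_r2_F(r2_i = 0.10, r2_j = 0.40, i = 1, j = 2, n = 18)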

If R²i is the R² for the i-th order equation, R²j is the R² for the next higher order (j = i + 1), and d.f.j is the residual degrees of freedom for the higher-order equation, the F statistic is

F = (R²j - R²i) / [(1 - R²j) / d.f.j]

It has 1 degree of freedom in the numerator and d.f.j degrees of freedom in the denominator. Here's an example, using the data on tortoise carapace length and clutch size from Ashton et al. The quadratic equation fits the data significantly better than the linear equation. Once you've figured out which equation is best (the quadratic, for our example, since the cubic and quartic equations do not significantly increase the R²), look for the parameters in the output.

References

X-ray of a tortoise from The Tortoise Shop.

Ashton et al. Geographic variation in body and clutch size of gopher tortoises.

Testing the risk-disturbance hypothesis in a fragmented landscape: