I have some data that is highly correlated. If I run a linear regression I get a regression line with a slope close to one (= 0.93). What I’d like to do is test whether this slope is significantly different from 1.0. My expectation is that it is not. In other words, I’d like to change the null hypothesis of the linear regression from a slope of zero to a slope of one. Is this a sensible approach? I’d also really appreciate it if you could include some R code in your answer so I could implement this method (or a better one you suggest!). Thanks.

**Answer**

```
set.seed(20)
y <- rnorm(20)
x <- y + rnorm(20, 0, 0.2)            # generate correlated data
summary(lm(y ~ x))                    # original model (tests slope = 0)
summary(lm(y ~ x, offset = 1.00 * x)) # testing against slope = 1
summary(lm(y - x ~ x))                # equivalent test against slope = 1
```
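Both tricks above fit the same regression but shift the null hypothesis for the slope from 0 to 1. The same t-statistic can also be computed by hand from the original fit, which makes the logic explicit: t = (estimate − 1) / SE, compared against a t distribution with the model's residual degrees of freedom. A minimal sketch with the same simulated data:

```r
set.seed(20)
y <- rnorm(20)
x <- y + rnorm(20, 0, 0.2)

fit <- lm(y ~ x)
b  <- coef(summary(fit))["x", "Estimate"]    # fitted slope (0.91424)
se <- coef(summary(fit))["x", "Std. Error"]  # its standard error

# t statistic for H0: slope = 1, and two-sided p-value
t_stat <- (b - 1) / se
p_val  <- 2 * pt(-abs(t_stat), df = fit$df.residual)
# matches the offset / y - x fits: t = -2.078, p = 0.0523
```

This reproduces exactly the `x` row of the second and third summaries, confirming that the offset and `y - x` formulations are just convenient ways of re-centering the null.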

Output of the original model (the t-test here is against a slope of 0, so the tiny p-value does not answer your question):

```
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.01532    0.04728   0.324     0.75
x            0.91424    0.04128  22.148 1.64e-14 ***
```

Output of the offset model (the `x` estimate is now the slope minus 1, so its t-test is against a slope of 1):

```
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.01532    0.04728   0.324   0.7497
x           -0.08576    0.04128  -2.078   0.0523 .
```

Output of the `y - x` model (identical, as expected):

```
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.01532    0.04728   0.324   0.7497
x           -0.08576    0.04128  -2.078   0.0523 .
```
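An equivalent way to read the same result is to check whether 1 falls inside the confidence interval for the slope, using `confint` (a sketch with the same simulated data):

```r
set.seed(20)
y <- rnorm(20)
x <- y + rnorm(20, 0, 0.2)

fit <- lm(y ~ x)
ci <- confint(fit, "x", level = 0.95)  # 95% CI for the slope
covers_one <- ci[1] <= 1 && 1 <= ci[2] # TRUE: 1 is (just) inside the interval
```

Since 1 lies just inside the 95% interval, you fail to reject H0: slope = 1 at the 5% level, consistent with the p-value of 0.0523 above.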

**Attribution**
*Source: Link, Question Author: Nick Crawford, Answer Author: GaBorgulya*