Here is a keras code sample that uses it:

```python
from keras.constraints import max_norm
from keras.layers import Conv2D

# Keras 2 API (`Convolution2D`/`border_mode` are the older Keras 1 names)
model.add(Conv2D(32, (3, 3), input_shape=(3, 32, 32),
                 padding='same', activation='relu',
                 kernel_constraint=max_norm(3)))
```

**Answer**

From http://cs231n.github.io/neural-networks-2/#reg:

> **Max norm constraints.** Another form of regularization is to enforce an absolute upper bound on the magnitude of the weight vector for every neuron and use projected gradient descent to enforce the constraint. In practice, this corresponds to performing the parameter update as normal, and then enforcing the constraint by clamping the weight vector w⃗ of every neuron to satisfy ‖w⃗‖₂ < c. Typical values of c are on the order of 3 or 4. Some people report improvements when using this form of regularization. One of its appealing properties is that the network cannot "explode" even when the learning rates are set too high, because the updates are always bounded.
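The projection step described above can be sketched in plain NumPy. This is a minimal illustration of the idea, not Keras's internal implementation: after an ordinary gradient update, each neuron's incoming weight vector is rescaled back onto the L2 ball of radius c whenever its norm exceeds c. The function name and the assumption that neurons correspond to columns of `W` are mine.

```python
import numpy as np

def max_norm_project(W, c=3.0):
    """Clamp each neuron's incoming weight vector (here: a column of W)
    so that its L2 norm is at most c, as in max-norm regularization."""
    norms = np.linalg.norm(W, axis=0, keepdims=True)          # per-neuron L2 norms
    factors = np.minimum(1.0, c / np.maximum(norms, 1e-12))   # shrink only if norm > c
    return W * factors

# After a normal gradient step, enforce the constraint by projection:
rng = np.random.default_rng(0)
W = rng.normal(scale=2.0, size=(64, 10))   # stand-in for updated weights
W = max_norm_project(W, c=3.0)
print(np.linalg.norm(W, axis=0).max())     # no column norm exceeds 3
```

Because the weights are re-projected after every update, their norms stay bounded by c regardless of the step size, which is why the network cannot "explode" under a too-large learning rate.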

**Attribution:** *Source: Link, Question Author: Jatin, Answer Author: Franck Dernoncourt*