Initializations - TFLearn (2024)

tflearn.initializations.zeros (shape=None, dtype=tf.float32, seed=None)

Initialize a tensor with all elements set to zero.

Arguments

  • shape: List of int. A shape to initialize a Tensor (optional).
  • dtype: The tensor data type.

Returns

The Initializer, or an initialized Tensor if a shape is specified.

tflearn.initializations.uniform (shape=None, minval=0, maxval=None, dtype=tf.float32, seed=None)

Initialization with random values from a uniform distribution.

The generated values follow a uniform distribution in the range [minval, maxval). The lower bound minval is included in the range, while the upper bound maxval is excluded.

For floats, the default range is [0, 1). For ints, at least maxval must be specified explicitly.

In the integer case, the random integers are slightly biased unless maxval - minval is an exact power of two. The bias is small for values of maxval - minval significantly smaller than the range of the output (either 2**32 or 2**64).

Arguments

  • shape: List of int. A shape to initialize a Tensor (optional).
  • minval: The lower bound of the range of random values to generate (included).
  • maxval: The upper bound of the range of random values to generate (excluded).
  • dtype: The tensor data type. Only float types are supported.
  • seed: int. Used to create a random seed for the distribution.

Returns

The Initializer, or an initialized Tensor if shape is specified.
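TFLearn is not exercised here; the sampling rule above can be sketched with NumPy (the function name mirrors the TFLearn one but is a hypothetical stand-in, and it returns an array rather than an Initializer):

```python
import numpy as np

def uniform(shape, minval=0.0, maxval=1.0, seed=None):
    # Draw values from [minval, maxval): the lower bound is included,
    # the upper bound is excluded.
    rng = np.random.default_rng(seed)
    return rng.uniform(minval, maxval, size=shape).astype(np.float32)
```

With the same seed the draw is reproducible, which is the role the seed argument plays in the real initializer.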

tflearn.initializations.uniform_scaling (shape=None, factor=1.0, dtype=tf.float32, seed=None)

Initialization with random values from a uniform distribution without scaling variance.

When initializing a deep network, it is in principle advantageous to keep the scale of the input variance constant, so it does not explode or diminish by reaching the final layer. If the input is x and the operation is x * W, and we want to initialize W uniformly at random, we need to pick W from

[-sqrt(3) / sqrt(dim), sqrt(3) / sqrt(dim)]

to keep the scale intact, where dim = W.shape[0] (the size of the input). A similar calculation for convolutional networks gives an analogous result with dim equal to the product of the first 3 dimensions. When nonlinearities are present, we need to multiply this by a constant factor. See Sussillo et al., 2014 (pdf) for deeper motivation, experiments and the calculation of constants. In section 2.3 there, the constants were numerically computed: for a linear layer it's 1.0, relu: ~1.43, tanh: ~1.15.

Arguments

  • shape: List of int. A shape to initialize a Tensor (optional).
  • factor: float. A multiplicative factor by which the values will be scaled.
  • dtype: The tensor data type. Only float types are supported.
  • seed: int. Used to create a random seed for the distribution.

Returns

The Initializer, or an initialized Tensor if shape is specified.
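The bound [-sqrt(3) / sqrt(dim), sqrt(3) / sqrt(dim)] described above, scaled by factor, can be sketched in NumPy (a hypothetical stand-in, not the TFLearn implementation):

```python
import numpy as np

def uniform_scaling(shape, factor=1.0, seed=None):
    # dim is the size of the input: the first dimension for a dense
    # weight matrix, the product of the first 3 dimensions for a
    # convolutional kernel.
    dim = shape[0] if len(shape) <= 2 else int(np.prod(shape[:3]))
    bound = factor * np.sqrt(3.0) / np.sqrt(dim)
    rng = np.random.default_rng(seed)
    return rng.uniform(-bound, bound, size=shape).astype(np.float32)
```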

tflearn.initializations.normal (shape=None, mean=0.0, stddev=0.02, dtype=tf.float32, seed=None)

Initialization with random values from a normal distribution.

Arguments

  • shape: List of int. A shape to initialize a Tensor (optional).
  • mean: Same as dtype. The mean of the normal distribution.
  • stddev: Same as dtype. The standard deviation of the normal distribution.
  • dtype: The tensor data type.
  • seed: int. Used to create a random seed for the distribution.

Returns

The Initializer, or an initialized Tensor if shape is specified.

tflearn.initializations.truncated_normal (shape=None, mean=0.0, stddev=0.02, dtype=tf.float32, seed=None)

Initialization with random values from a truncated normal distribution.

The generated values follow a normal distribution with specified mean and standard deviation, except that values whose magnitude is more than 2 standard deviations from the mean are dropped and re-picked.

Arguments

  • shape: List of int. A shape to initialize a Tensor (optional).
  • mean: Same as dtype. The mean of the truncated normal distribution.
  • stddev: Same as dtype. The standard deviation of the truncated normal distribution.
  • dtype: The tensor data type.
  • seed: int. Used to create a random seed for the distribution.

Returns

The Initializer, or an initialized Tensor if shape is specified.
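The drop-and-re-pick rule can be sketched with NumPy (a hypothetical stand-in for the TFLearn initializer):

```python
import numpy as np

def truncated_normal(shape, mean=0.0, stddev=0.02, seed=None):
    rng = np.random.default_rng(seed)
    out = rng.normal(mean, stddev, size=shape)
    # Values more than 2 standard deviations from the mean are
    # dropped and re-drawn until every sample falls inside the bound.
    bad = np.abs(out - mean) > 2 * stddev
    while bad.any():
        out[bad] = rng.normal(mean, stddev, size=int(bad.sum()))
        bad = np.abs(out - mean) > 2 * stddev
    return out.astype(np.float32)
```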

tflearn.initializations.xavier (uniform=True, seed=None, dtype=tf.float32)

Returns an initializer performing "Xavier" initialization for weights.

This initializer is designed to keep the scale of the gradients roughly the same in all layers. For a uniform distribution this ends up being the range x = sqrt(6. / (in + out)); [-x, x], and for a normal distribution a standard deviation of sqrt(3. / (in + out)) is used.

Arguments

  • uniform: Whether to use uniform or normally distributed random initialization.
  • seed: A Python integer. Used to create random seeds. See set_random_seed for behavior.
  • dtype: The data type. Only floating point types are supported.

Returns

An initializer for a weight matrix.

References

Understanding the difficulty of training deep feedforward neural networks. International Conference on Artificial Intelligence and Statistics. Xavier Glorot and Yoshua Bengio (2010).

Links

http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf
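Using the formulas quoted above, a 2-D version can be sketched in NumPy (a hypothetical stand-in, with in and out taken from the weight matrix shape):

```python
import numpy as np

def xavier(shape, uniform=True, seed=None):
    # For a dense weight matrix, in/out are the two dimensions.
    fan_in, fan_out = shape[0], shape[1]
    rng = np.random.default_rng(seed)
    if uniform:
        x = np.sqrt(6.0 / (fan_in + fan_out))
        return rng.uniform(-x, x, size=shape).astype(np.float32)
    stddev = np.sqrt(3.0 / (fan_in + fan_out))
    return rng.normal(0.0, stddev, size=shape).astype(np.float32)
```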

tflearn.initializations.variance_scaling (factor=2.0, mode='FAN_IN', uniform=False, seed=None, dtype=tf.float32)

Returns an initializer that generates tensors without scaling variance.

When initializing a deep network, it is in principle advantageous to keep the scale of the input variance constant, so it does not explode or diminish by reaching the final layer. This initializer uses the following formula:

if mode='FAN_IN':    # Count only number of input connections.
    n = fan_in
elif mode='FAN_OUT': # Count only number of output connections.
    n = fan_out
elif mode='FAN_AVG': # Average number of input and output connections.
    n = (fan_in + fan_out) / 2.0

truncated_normal(shape, 0.0, stddev=sqrt(factor / n))

To get http://arxiv.org/pdf/1502.01852v1.pdf use (default):
  factor=2.0, mode='FAN_IN', uniform=False.

To get http://arxiv.org/abs/1408.5093 use:
  factor=1.0, mode='FAN_IN', uniform=True.

To get http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf use:
  factor=1.0, mode='FAN_AVG', uniform=True.

To get xavier_initializer use either:
  factor=1.0, mode='FAN_AVG', uniform=True, or
  factor=1.0, mode='FAN_AVG', uniform=False.

Arguments

  • factor: Float. A multiplicative factor.
  • mode: String. 'FAN_IN', 'FAN_OUT', 'FAN_AVG'.
  • uniform: Whether to use uniform or normally distributed random initialization.
  • seed: A Python integer. Used to create random seeds. See set_random_seed for behavior.
  • dtype: The data type. Only floating point types are supported.

Returns

An initializer that generates tensors with unit variance.
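The mode selection and scaling above can be sketched in NumPy (a hypothetical stand-in; a plain normal is used here where the real initializer draws from a truncated normal):

```python
import numpy as np

def variance_scaling(shape, factor=2.0, mode='FAN_IN', uniform=False, seed=None):
    fan_in, fan_out = shape[0], shape[-1]
    n = {'FAN_IN': fan_in,
         'FAN_OUT': fan_out,
         'FAN_AVG': (fan_in + fan_out) / 2.0}[mode]
    rng = np.random.default_rng(seed)
    if uniform:
        # For a uniform draw, limit = sqrt(3 * factor / n) gives the
        # same variance factor / n as the normal branch below.
        limit = np.sqrt(3.0 * factor / n)
        return rng.uniform(-limit, limit, size=shape).astype(np.float32)
    stddev = np.sqrt(factor / n)
    return rng.normal(0.0, stddev, size=shape).astype(np.float32)
```

With factor=1.0, mode='FAN_AVG', uniform=True the limit reduces to sqrt(3 / ((in + out) / 2)) = sqrt(6 / (in + out)), matching the xavier_initializer recipe listed above.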
