Is it correct to say that neural networks are an alternative way of performing Maximum Likelihood Estimation? If not, why not?
We often say that minimizing the cross-entropy error (i.e., the negative log-likelihood) is the same as maximizing the likelihood. So can we say that neural networks are just an alternative way of performing Maximum Likelihood Estimation? If not, why not?
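For concreteness, here is the identity behind that premise, written for a network with output $\hat p_i = f_\theta(x_i)$ and binary targets $y_i$ (a minimal sketch assuming the usual conditional Bernoulli model):

$$
-\log L(\theta) \;=\; -\sum_{i=1}^n \log p(y_i \mid x_i, \theta) \;=\; -\sum_{i=1}^n \Big[ y_i \log \hat p_i + (1 - y_i) \log (1 - \hat p_i) \Big],
$$

which is exactly the (summed) binary cross-entropy loss, so minimizing it over $\theta$ maximizes the likelihood of the observed labels under that model.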
Tags: neural-networks, maximum-likelihood
Comment (Sycorax): Possible duplicate of "Can we use MLE to estimate Neural Network weights?"
2 Answers
Answer (Tim♦):

In abstract terms, neural networks are models, or if you prefer, functions with unknown parameters, where we try to learn the parameters by minimizing a loss function (not just cross-entropy; there are many other possibilities). Minimizing a loss is in most cases equivalent to maximizing some likelihood function, but, as discussed in this thread, it is not that simple.

You cannot say that they are equivalent, because minimizing a loss, or maximizing a likelihood, is a method of finding the parameters, while the neural network is the function defined in terms of those parameters.
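To make that distinction concrete, here is a minimal sketch (plain NumPy, hypothetical names, not any particular library's API) in which the network is just a parameterised function, and the Bernoulli negative log-likelihood, i.e. binary cross-entropy, is the separate criterion one would minimize to estimate its parameters:

```python
import numpy as np

def forward(params, X):
    """The model: a tiny one-hidden-layer network mapping x to P(y = 1 | x)."""
    W1, b1, W2, b2 = params
    h = np.tanh(X @ W1 + b1)                  # hidden layer, shape (n, hidden)
    logits = h @ W2 + b2                      # output pre-activation, shape (n,)
    return 1.0 / (1.0 + np.exp(-logits))      # sigmoid -> Bernoulli probability

def neg_log_likelihood(params, X, y, eps=1e-9):
    """The estimation criterion: Bernoulli negative log-likelihood
    (equivalently, the binary cross-entropy loss)."""
    p = forward(params, X)
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

# Toy data and random initial parameters, just to show how the two pieces fit together.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X[:, 0] > 0).astype(float)
params = [rng.normal(scale=0.1, size=(3, 5)), np.zeros(5),
          rng.normal(scale=0.1, size=5), 0.0]
print(neg_log_likelihood(params, X, y))
```

Swapping the criterion (say, for a penalized or robust loss) changes the estimation method but leaves the model `forward` untouched, which is exactly the separation the answer is pointing at.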
Comment (Sycorax): I'm trying to parse the distinction that you draw in the second paragraph. If I understand correctly, you would approve of a statement such as "My neural network model maximizes a certain log-likelihood" but not the statement "Neural networks and maximum likelihood estimators are the same concept." Is this a fair assessment?

Reply (Tim♦): @Sycorax Yes, that is correct. If it is unclear and you have an idea for better phrasing, feel free to suggest an edit.

Comment (aca06): What if instead we compare gradient descent and MLE? It seems to me that they are just two methods for finding the best parameters.

Reply (Tim♦): @aca06 Gradient descent is an optimization algorithm; MLE is a method of estimating parameters. You can use gradient descent to find the minimum of the negative log-likelihood function (or gradient ascent to maximize the likelihood).
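As a toy illustration of that division of labor (a sketch, not any particular library's API): maximum likelihood specifies what is being estimated, and gradient descent is merely one way to carry out the optimization. Here the MLE of a Gaussian mean is recovered by gradient descent on the negative log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 2.0                                         # assumed known
data = rng.normal(loc=3.0, scale=sigma, size=500)   # sample with unknown mean

def neg_log_lik(mu, x):
    """Negative log-likelihood of a N(mu, sigma^2) model for the sample x."""
    return (np.sum((x - mu) ** 2) / (2 * sigma**2)
            + x.size * np.log(sigma * np.sqrt(2 * np.pi)))

mu, lr = 0.0, 1e-3
for _ in range(200):
    grad = -np.sum(data - mu) / sigma**2    # analytic d(NLL)/d(mu)
    mu -= lr * grad                         # one gradient-descent step on the NLL

print(round(mu, 3), round(data.mean(), 3))  # both approximate the MLE: the sample mean
```

The same estimand could instead be obtained in closed form or with a different optimizer; the choice of optimizer does not change what is being estimated.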
Answer (Cliff AB):

These are fairly orthogonal topics.

Neural networks are a type of model with a very large number of parameters. Maximum likelihood estimation is a very common method for estimating parameters from a model and data: typically, the model lets you compute a likelihood function from the data and candidate parameter values, and since we don't know the actual parameter values, one way of estimating them is to use the values that maximize that likelihood. Neural networks are our model; maximum likelihood estimation is one method for estimating the parameters of our model.
One slightly technical note: often, maximum likelihood estimation is not exactly what is used when training neural networks, because many common regularization methods mean we are not actually maximizing a likelihood function. These include:

(1) Penalized maximum likelihood. This one is a bit of a cop-out, since it does not take much effort to view a penalized likelihood as just a different objective (i.e., one with priors) that one is maximizing; a brief sketch of this equivalence follows the list.

(2) Dropout. Especially in many newer architectures, unit activations (or, in some variants, the weights themselves) are randomly set to 0 during training. This procedure is more clearly outside the realm of maximum likelihood estimation.

(3) Early stopping. One way to prevent overfitting is simply to stop the optimization algorithm before it converges. Again, this is technically not maximum likelihood estimation; it is really just an ad hoc fix for overfitting.

(4) Bayesian methods, probably the most common alternative to maximum likelihood estimation in the statistics world, are also used for estimating the parameter values of a neural network, although this is often too computationally intensive for large networks.
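A brief sketch of the equivalence mentioned in (1), assuming a quadratic (weight-decay) penalty just for illustration:

$$
\hat\theta_{\text{pen}} \;=\; \arg\max_\theta \Big[ \log L(\theta) - \lambda \lVert \theta \rVert_2^2 \Big] \;=\; \arg\max_\theta \log\!\Big[ L(\theta)\, \pi(\theta) \Big], \qquad \pi(\theta) \propto e^{-\lambda \lVert \theta \rVert_2^2},
$$

so the penalized maximizer is the mode of a reweighted likelihood, i.e. the MAP estimate under a Gaussian prior on the weights; strictly speaking, it is no longer the maximizer of the likelihood itself.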