Early Warning Signalling with Confidence
During my PhD (and also after it) I worked with
artificial neural networks. My view was that the weights
in these networks should not be based on point probabilities
alone; they should be made into distributions, so that you
could also tell how strongly you believed in something, in
line with the Bayesian view.
I was not very good at statistics at the time, but a good
friend in our group helped me set up approximations for the
weight distributions.
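To illustrate the idea (this is a toy sketch of my own, not
the actual approximations we used): treat a weight as a Beta
posterior over an event frequency instead of a single number.
Two weights can then share the same point estimate while
carrying very different amounts of confidence:

    import numpy as np

    rng = np.random.default_rng(0)

    # Two "weights" with the same point estimate (0.5) but different
    # amounts of evidence behind them: 2 of 4 observations versus
    # 500 of 1000. The width of the Beta posterior shows the difference.
    for hits, trials in [(2, 4), (500, 1000)]:
        samples = rng.beta(hits + 1, trials - hits + 1, 100_000)
        lo, hi = np.percentile(samples, [2.5, 97.5])
        print(f"{hits}/{trials}: mean {samples.mean():.2f}, "
              f"95% interval [{lo:.2f}, {hi:.2f}]")

The first weight gets a wide interval (little evidence), the
second a narrow one, even though both means are 0.5.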
At that time a great guy from Panama had started discussions
about applying our neural network methods to the field of
pharmacovigilance. This became a very successful collaboration,
and the method, based on a measure called the information
component, is now becoming a standard within pharmacovigilance;
it has been in routine use since 1998.
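For concreteness, the information component for a drug-reaction
pair is essentially

    IC = log2( p(drug, reaction) / (p(drug) * p(reaction)) ),

and the Bayesian treatment yields a credible interval for it
rather than a bare point estimate. Below is a minimal Monte
Carlo sketch in Python; the uniform priors and the independent
Beta posteriors are simplifying assumptions of mine, not the
closed-form moments of the published method:

    import numpy as np

    rng = np.random.default_rng(0)

    def ic_credible_interval(n_pair, n_drug, n_adr, n_total, draws=100_000):
        # Posterior samples for each frequency, with uniform Beta(1, 1)
        # priors and counts from a reports database (toy assumption:
        # the three posteriors are sampled independently).
        p_pair = rng.beta(n_pair + 1, n_total - n_pair + 1, draws)
        p_drug = rng.beta(n_drug + 1, n_total - n_drug + 1, draws)
        p_adr = rng.beta(n_adr + 1, n_total - n_adr + 1, draws)
        ic = np.log2(p_pair / (p_drug * p_adr))
        return np.percentile(ic, [2.5, 50, 97.5])

    # Made-up counts: 25 reports pair the drug with the reaction,
    # out of 200 drug reports and 400 reaction reports among
    # 100000 reports in total.
    lo, median, hi = ic_credible_interval(25, 200, 400, 100_000)
    print(f"IC about {median:.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")

A lower interval bound above zero is then read as a potential
early warning signal for the pair.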
I don't want to present myself as the inventor here. This was
a team effort where everyone contributed essential ideas, and
with our combined effort we came up with a working solution.
(One geek is good, but with plenty of them you can do
wonders.)
Link to paper:
Bayesian neural networks with confidence estimations applied
to data mining
Problems:
- Even though the method is great, it is unfortunately not
  possible to publish the paper on the web, due to copyright
  rules.
- In this kind of publication there is a tremendous time lag.
  The paper was originally submitted in September 1998 and was
  finally published in October 2000.
- The paper is published in a journal with an annual
  subscription fee of around 250 USD. Not even our library can
  afford this journal any more.
- Even though research results should be free for all, in
  practice they are locked in.
- It is not possible to store this kind of knowledge in a big
  database and, for instance, perform data mining on it, because
  of this locking in. Purchasing all the papers that would be
  needed would cost a tremendous amount.
- Formats for this kind of information exchange are not
  standardized, which makes it very hard to do anything
  meaningful with them. The formats used today are almost
  completely lacking in structure. I use TeX, some use Word,
  but both are nearly useless from an information retrieval
  viewpoint. Once SGML was used, which allowed HTML to be
  generated from it. HTML is just another unstructured mess.
  XML is a subset of SGML and thus less general; why go
  backwards?
- If I wrote a similar paper and did not send it to a
  publisher, I could publish it on the web, but then how could
  you know that what I had written was not crap, since it would
  not have been peer reviewed?
aimnonutopia.org.
Last modified: Tue Mar 22 06:47:36 CEST 2002