Human Search Engine
"The collector is the true resident of the interior.....The
collector dreams his way not only into a distant or bygone world,
but also into a better one
is a difficult journey towards a
, often symbolic, abstract in
What is a Human Search Engine?
A person or persons searching multiple search engines with multiple keywords and phrases, searching multiple websites for links and information. Also using other media such as TV, movies, documentaries, radio, magazines, newspapers, advertisers and information from other people. Doing all this to find relevant information and websites pertaining to a particular subject, whatever that subject may be, and the never-ending process of finding ways to make sure the public is informed about the realities of our lives and our current situation, blending the top search results.
Web mining is the application of data mining techniques to discover patterns from the World Wide Web. Web mining can be divided into three different types: Web usage mining, Web content mining and Web structure mining.
Search Engine Types
An Internet bot is a software application that runs automated tasks (scripts) over the Internet.
To aggregate is to form and gather separate units into a mass or whole.
I am a Human Search Engine, but it's much more than that...
An archivist is an information professional who assesses, collects, organizes, preserves, maintains control over, and provides access to records and archives determined to have long-term value.
I'm an Internet Miner exploring the World Wide Web, an archivist of Information and Knowledge, extracting and aggregating the best that the Internet and the world has to offer, indexing the Internet one website at a time. I'm a Knowledge Moderator, but it's more than that: an accumulator of knowledge who seeks to pass knowledge on to others.
A web portal is a specially designed web site that brings information together from diverse sources in a uniform way.
Extract, Transform, Load (ETL) is a process in database usage, and especially in data warehousing, that extracts data from homogeneous or heterogeneous data sources, transforms the data for storing it in the proper format or structure for the purposes of querying and analysis, and loads it into the final target (database; more specifically, an operational data store, data mart, or data warehouse).
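Here is a minimal sketch of that ETL flow in Python (the file, table and column names are made up for illustration, not part of any real system): extract rows from a CSV source, transform them into a uniform shape, and load them into a SQLite target.

```python
import csv
import sqlite3

def extract(csv_path):
    """Extract: read raw rows from a source (here, a CSV file with title,url columns)."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def transform(rows):
    """Transform: normalize the data into the structure the target expects."""
    return [(row["title"].strip(), row["url"].strip().lower()) for row in rows]

def load(records, db_path="warehouse.db"):
    """Load: write the cleaned records into the final target (a SQLite table)."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS links (title TEXT, url TEXT)")
    con.executemany("INSERT INTO links VALUES (?, ?)", records)
    con.commit()
    con.close()

if __name__ == "__main__":
    load(transform(extract("bookmarks.csv")))  # hypothetical source file
```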
Part of the System since 2008. Welcome to my journey in information heaven: over 18 years of internet searches that are organized, categorized and contextualized. A dream come true. I have already clicked my mouse over a million times, and I've only just begun. I have tracked over 90% of my online activities since 1998, so my search trail is a long one. This is my story about one man's journey through the Internet.
What if you shared everything you learned?
Did you ever wonder?
To put it simply, "I'm Organizing the Internet." Over the last 18 years, since 1998, I have been surfing the world wide web, or the internet, and documenting my experience. I've asked the internet well over 500,000 questions so far. And from those questions I have gathered a lot of information, knowledge and resources. So I then organized this information, knowledge and resources into categories. I then published it on my website so that the information, knowledge and resources can be shared and used for educational purposes. I also share what I've personally learned from this incredible, endless journey that I have taken through the internet. The internet is like the universe: I'm not overwhelmed by the size of the Internet, I'm just amazed by all the things that I have learned, and wondering just how much more I will be able to understand. Does knowledge and information have a limit? Well, let's find out.
Life for me has always been about discovering limits; this is just another one. I'm an internet surfer who has been riding the perfect wave for over 12 years. But this is nothing new. In the early 1900's, Paul Otlet pursued his quest to organize the world's knowledge and envisioned the internet before modern computers were being used.
Mundaneum
Organizing Wiki Pages
Art and Science of Curation
The World Brain is a World Encyclopaedia that could help world citizens make the best use of universal information resources and make the best contribution to world peace.
Ontology (information science) is usually hierarchical and contains all the relevant entities: the types, properties, and interrelationships of the entities that really or fundamentally exist for a particular domain of discourse.
Human Search Engine
A human search engine is not manipulated by money or by defective and biased algorithms. A human search engine is created by humans, for humans. We don't have everything, but who needs everything? People want what's important. People want the most valuable knowledge and information that is available, without stupid ads, and without any ignorant manipulation or censorship. People want a trusted source for information, a source that cares about people more than money. A source indexed by human eyes using human judgment.
I'm an Internet Pathfinder whose task it is to carry out Daily Internet Reconnaissance Missions and document my findings. No, I'm not an expert, but I have created an excellent resource.
Our physical journeys in the world are just as important as our mental explorations in the mind; the discoveries are endless. These days I seem to be leaving more digital footprints than actual footprints. Which one is more meaningful?
To pioneer is to open up and explore a new area; to take the lead or initiative in, or participate in the development of, something; leading the way, trailblazing.
To mine is to extract (information) from various sources; to gather, as of natural products; to accumulate.
I'm more of a Knowledge Organizer and Knowledge Sharer than a knowledge keeper. I am just a bee in the hive of Knowledge, doing my part to keep the hive thriving. A beehive is an enclosed structure in which some honey bee species of the subgenus Apis live and raise their young.
The Hive FCV
"I wouldn't say that I'm
a wisdom keeper, I more of a wisdom sharer, which makes everyone a
“For every minute spent in organizing, an hour is earned
I feel like a conduit, a passage (a pipe or tunnel), a channel for transferring information, synchronizing information to and from various sources. These projects are the work accumulated from one Human Editor - The Power of One.
Looking for Adventure.com: 60,000 handpicked websites (external links), which took 14 years to accumulate as of 2016.
Basic Knowledge 101.com: 50,000 handpicked websites (external links), which took 8 years to accumulate as of 2016.
The Internet and computer digital information combined allow a person to save the work that they have done and create a living record of information and experiences. Looking for Adventure.com: "not a total copy of my life, but getting close." Things don't have to be written in stone anymore, but it doesn't hurt to have an archive.
When I started in 1998 I didn't know how much knowledge and information I would find, nor did I know what kind of knowledge and information I would find, or what kind of benefits would come from this knowledge and information. Like a miner in the old days, you dig a little each day and see what you get. And wouldn't you know it, I hit the jackpot. The wealth of information and knowledge that there is in the world is enormous, and invaluable. But we can't celebrate just yet; we still need to distribute our wealth of knowledge and information and give everyone access. Otherwise we will never fully benefit from our wealth of knowledge and information, nor will we ever fully benefit from the enormous potential that it will give us.
"I saw a huge unexplored
ocean, so naturally I dove in to take a look. 8 years later in
2016, I have been exploring this endless sea of knowledge, and
have come to realize that I have found a home."
About my Research
An Information Filtering System is a system that removes redundant or unwanted information from an information stream using (semi)automated or computerized methods prior to presentation to a human user. Its main goal is the management of information overload and the increment of the semantic signal-to-noise ratio. To do this, the user's profile is compared to some reference characteristics. These characteristics may originate from the information item (the content-based approach) or the user's social environment (the collaborative filtering approach). Filtering is not the same as a filter, which is a device that removes something from whatever passes through it: a porous device for removing impurities or solid particles from a liquid or gas passed through it. Porous means full of pores or vessels or holes allowing passage in and out.
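As a rough illustration of the content-based approach described above, here is a small Python sketch (the stream items and the keyword profile are invented): items whose words overlap the user's profile are kept and ranked, the rest are filtered out as noise.

```python
def filter_stream(items, profile_terms, threshold=1):
    """Content-based filtering: keep items that match enough of the user's profile."""
    kept = []
    for item in items:
        words = set(item.lower().split())
        score = len(words & profile_terms)  # overlap with the reference characteristics
        if score >= threshold:
            kept.append((score, item))
    # Highest-scoring items first: raises the signal-to-noise ratio for the reader.
    return [item for score, item in sorted(kept, reverse=True)]

stream = [
    "New telescope maps distant galaxies",
    "Celebrity gossip roundup",
    "Open access astronomy data released",
]
profile = {"astronomy", "telescope", "galaxies", "data"}
print(filter_stream(stream, profile))
```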
Filter (signal processing): a device or process that removes some unwanted components or features from a signal. Filtering is a class of signal processing, the defining feature of filters being the complete or partial suppression of some aspect of the signal.
Abstraction is a conceptual process by which general rules and concepts are derived from the usage and classification of specific examples. Conceptual abstractions may be formed by filtering the information content of a concept or an observable phenomenon, selecting only the aspects which are relevant for a particular purpose.
Terminology extraction is a subtask of information extraction. The goal of terminology extraction is to automatically extract relevant terms from a given corpus: to collect a vocabulary of domain-relevant terms, constituting the linguistic surface manifestation of domain concepts.
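Here is a very simple Python sketch of the idea (a toy corpus and a crude stop-word list; real terminology extraction uses linguistic analysis and statistical measures): count candidate words in a domain corpus and keep the most frequent ones as terms.

```python
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for", "on", "are"}

def extract_terms(corpus, top_n=10):
    """Collect a vocabulary of domain-relevant terms by simple frequency counting."""
    words = re.findall(r"[a-z]+", corpus.lower())
    candidates = [w for w in words if w not in STOPWORDS and len(w) > 3]
    return Counter(candidates).most_common(top_n)

corpus = ("Search engines index web pages. A web index helps retrieval. "
          "Indexing and retrieval are core tasks of search engines.")
print(extract_terms(corpus))
```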
Noisy Text Analytics
is a process of information extraction whose goal is to automatically extract structured or semistructured information from noisy unstructured text data. Here, noise can be seen as all the differences between the surface form of a coded representation of the text and the intended, correct, or original text.
Gatekeeping is the process through which information is filtered for dissemination, whether for publication, broadcasting, the Internet, or some other mode of communication. Gatekeepers are individuals who decide whether a given message will be distributed by a mass medium; they serve in various roles including academic admissions, financial advising, and news editing. Not to be confused with mass media.
Collaborative filtering is the process of filtering for information or patterns using techniques involving collaboration among multiple agents, viewpoints, data sources, etc., sometimes making automatic predictions about the interests of a user by collecting preferences or taste information from many users (collaborating).
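A minimal sketch of collaborative filtering in Python (the users and ratings are made up): predict whether a user will like an item by weighting the opinions of users with similar tastes.

```python
# Toy ratings: user -> {item: rating}. All data is invented for illustration.
ratings = {
    "alice": {"maps": 5, "history": 4, "cooking": 1},
    "bob":   {"maps": 4, "history": 5, "science": 5},
    "carol": {"cooking": 5, "science": 2},
}

def similarity(a, b):
    """Similarity between two users = number of shared items rated within 1 point."""
    shared = set(ratings[a]) & set(ratings[b])
    return sum(1 for item in shared if abs(ratings[a][item] - ratings[b][item]) <= 1)

def predict(user, item):
    """Predict a rating from similar users who have rated the item (the collaboration)."""
    votes = [(similarity(user, other), ratings[other][item])
             for other in ratings if other != user and item in ratings[other]]
    weighted = [sim * score for sim, score in votes if sim > 0]
    sims = [sim for sim, _ in votes if sim > 0]
    return sum(weighted) / sum(sims) if sims else None

print(predict("alice", "science"))  # alice hasn't rated "science" yet
```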
Deep Packet Inspection
is a form of computer network packet filtering that examines the data part (and possibly also the header) of a packet as it passes an inspection point, searching for protocol non-compliance, viruses, spam, intrusions, or defined criteria to decide whether the packet may pass, whether it needs to be routed to a different destination, or whether statistical information should be collected. It functions at the Application layer of the OSI model (Open Systems Interconnection model).
So how does one person create databases this large in such a short time? The techniques and methods are quite simple. When doing internet searches, for whatever reason, you are bound to come across a website or keyword phrase that relates to your subject matter. Then you do more searches using those keywords, and then save those keywords and websites to your database. This is very important because most likely you will never come across the same info related to those particular search parameters, so saving and documenting your findings is very important (a small sketch of this workflow follows below).
Terminology Extraction
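Here is a rough Python sketch of that save-as-you-go workflow (the database and table names are hypothetical): every useful keyword and website found during a search session gets written to a small local database so it is never lost.

```python
import sqlite3
from datetime import date

def save_finding(db, subject, keyword, url):
    """Document a finding: the subject searched, the keyword used, the site found."""
    db.execute(
        "CREATE TABLE IF NOT EXISTS findings (day TEXT, subject TEXT, keyword TEXT, url TEXT)"
    )
    db.execute(
        "INSERT INTO findings VALUES (?, ?, ?, ?)",
        (date.today().isoformat(), subject, keyword, url),
    )
    db.commit()

db = sqlite3.connect("search_trail.db")  # hypothetical file name
save_finding(db, "astronomy", "deep field survey", "https://example.org/survey")
for row in db.execute("SELECT * FROM findings ORDER BY day"):
    print(row)
db.close()
```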
When reading, watching TV, watching a movie, or even talking with someone, you are bound to come across ideas and keywords that you could use when searching for more information pertaining to your subject. Then again, saving and documenting your findings is very important. It's always a good idea to have a pen and paper handy to write things down, or you can use your phone to record a voice memo so that you don't forget your information or ideas. The main thing is to have a subject that you're interested in, and at the same time to be aware of what information is valuable to your subject when it finally presents itself. The third task is organizing, updating and improving your database so that it stays functional and easy to access. So my time is usually balanced between these three tasks, and yes, it is time consuming. You can also use the Big 6 techniques for handling information to help with your efficiency and effectiveness. I also created an Internet Searching Tips help section for useful ideas.
List of Glossaries
One last thing: if you spend a lot of time on the internet doing searches and looking for answers, you are bound to come across some really useful websites and information that were not relevant to what you were originally searching for. So it's a good idea to start saving these useful websites in new categories, or just save them in an appropriately named folder in your documents. This way you can share these websites with friends or just use them at some later time. It is sometimes called Creating Search Trails, of which I have 18 years' worth as of 2016. Not bad for a Personal Web Page.
The Deep Web
consists of those pages that Google and other
search engines don't index.
The Dark Web
is an actively hidden, often anonymous part of the
deep web but it isn't inherently bad.
Deep Web Exploring, the part of the internet that very few people have ever seen. Memex
How the Mysterious Dark Net is going Mainstream
Google has indexed 1 trillion pages so far in 2016, but that is
only 5% of the total knowledge and information that we have.
The Surface Web is also called the Visible Web, Clearnet, Indexed Web, Indexable Web or Lightnet. It is that portion of the World Wide Web that is readily available to the general public and searchable with standard web search engines. It is the opposite of the deep web.
In a way, my Human Search Engine became Basic Knowledge 101, which shows the importance of a Human Operating System in regards to having a more comprehensive and effective education. This is my Education Knowledge Database Project, and this is just the beginning. Basic Knowledge 101.com is my main project. Working on this project, I went from a hobby right into a profession. I started out as a Non-Degree Seeking Student, but I ended up with an education of sorts, well, almost.
I have done my fieldwork, I have acquired specialized skills, and I have done advanced original research. My Business Card. But I still have no name for my Advanced Academic Degree. Maybe "Internet Comprehension 101".
Is anyone actually studying the Internet? In some ways they are. I wonder what they're learning? There is also Web Science, which is not the same as Web of Science. Free Open Access. I wonder who else is studying these subjects in this particular way besides me? For now I am just a researcher who is working on a project that 90% of people cannot comprehend. So I guess that makes me kind of a pioneer.
Internet Education Knowledge
The Information Age
We are now living in the Information Age, a time where information and knowledge are so abundant that we can no longer ignore them. But sadly, not everyone understands what information is, nor do most people understand the potential of knowledge and information. The Information Age is the greatest transition of the human race, and of our planet. The power of knowledge is just beginning to be realized. Knowledge and information give us an incredible ability to explore ourselves, our world and our universe in ways that we have never imagined. Knowledge and information can improve the lives of every man, woman and child on this planet. Knowledge and information will also help us understand the importance of all life forms on this planet like never before. This is truly the Greatest Awakening of our world.
A knowledge market is a mechanism for distributing knowledge resources.
Knowledge Management
Information Literacy
What have I Learned about being a Human Search Engine?
I am a curator as well as a Human Search Engine. Humans will always be better than machines when it comes to associations; some things need to be done manually, especially when it comes to organizing information and knowledge. Linking Library and Information Science, and creating a knowledge base, is what I have been doing for 10 years. "Welcome to Web 3.0." I'm an organizer, because there are just some things that machines can't do, or can't do well. Automated systems can only do so much. So we need more human search engines.
Creating knowledge bases is absolutely essential. This is why I believe that having more Human Search Engines is a benefit to anyone seeking knowledge and information. (Human-Based Genetic Algorithms) Structuring websites into syntax link patterns, and information into categories or taxonomies, without being objective or impartial.
Organizing categories and websites so that visitors have an easy time finding what they're looking for (Principle of Least Effort), while at the same time showing them other things related to that particular subject that might also be of interest to them (Abstraction). More relevant choices, and a great alternative and complement to ordinary search engines. But it's not easy to manage and maintain a human search engine, especially for one person. You're constantly updating the link database, adding links, replacing links or removing some links altogether. Then on top of that there's the organizing and the adding of content, photos and video. And all the while your website grows and grows. Adding related subjects and subcategorizing information and links. Cross linking, so related information can be found in more than one place, while at the same time displaying more related information.
Semantic Web Info
What being a Human Search Engine Represents
A Human Search Engine is more than just a web directory, and it's more than just an Information Hub or web portal. A Human Search Engine is also more than just a search engine.
Knowledge Organization is a branch of Library and Information Science (LIS) concerned with activities such as document description, indexing and classification performed in libraries, databases, archives, etc.
is a method by which a country
gathers information using non-governmental employees.
An aggregator refers to a web site or computer software that aggregates a specific type of information from multiple online sources.
Knowledge Extraction is the creation of knowledge from structured (relational databases, XML) and unstructured (text, documents, images) sources. The resulting knowledge needs to be in a machine-readable and machine-interpretable format and must represent knowledge in a manner that facilitates inferencing.
Information Filtering System
A directory is a file system cataloging structure which contains references to other computer files, and possibly other directories. On many computers, directories are known as folders, or drawers, to provide some relevancy to a workbench or the traditional office filing cabinet.
A web directory is a directory on the World Wide Web: a collection of data organized into categories. It specializes in linking to other web sites and categorizing those links.
Types of Books
Web indexing refers to various methods for indexing the contents of a website or of the Internet as a whole. Individual websites or intranets may use a back-of-the-book index, while search engines usually use keywords and metadata to provide a more useful vocabulary for Internet or onsite searching. With the increase in the number of periodicals that have articles online, web indexing is also becoming important for periodical websites.
The Semantic Web is an extension of the Web through standards by the World Wide Web Consortium (W3C). The standards promote common data formats and exchange protocols on the Web, most fundamentally the Resource Description Framework (RDF).
Organic Search Engine
is a search engine that uses human
participation to filter the search results
and assist users in clarifying
their search request. The goal is to provide users with a limited number
of relevant results, as opposed to traditional search engines that often
return a large number of results that may or may not be relevant.
Organic search is a method for entering one or a plurality of search items in a single data string into a search engine. Organic search results are listings on search engine results pages that appear because of their relevance to the search terms, as opposed to their being advertisements. In contrast, non-organic search results may include pay-per-click advertising.
Hybrid Search Engine
is a type of computer search engine
that uses different types of data with or without ontologies to produce
the algorithmically generated results based on web crawling. Previous
types of search engines only use text to generate their results. Hybrid
search engines use a combination of both crawler-based results and
directory results. More and more search engines these days are moving to a
hybrid-based model. Question and Answer.
Search Engine (computing)
is an information retrieval system designed
to help find information stored on a computer system. The search results
are usually presented in a list and are commonly called hits. Search
engines help to minimize the time required to find information and the
amount of information which must be consulted, akin to other techniques
for managing information overload. The most public, visible form of a
search engine is a Web search engine which searches for information on the
World Wide Web.
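To show what an information retrieval system amounts to at its smallest, here is a toy search engine in Python (three made-up documents): it builds an inverted index up front and answers a query with a list of hits.

```python
from collections import defaultdict

documents = {  # toy collection; in a real engine these would be crawled web pages
    1: "the deep web is the part of the web that is not indexed",
    2: "a human search engine organizes links by hand",
    3: "search engines index web pages to answer queries quickly",
}

# Build the inverted index: term -> set of document ids containing that term.
index = defaultdict(set)
for doc_id, text in documents.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(query):
    """Return the ids of documents containing every term in the query (the 'hits')."""
    terms = query.lower().split()
    if not terms:
        return set()
    hits = index.get(terms[0], set()).copy()
    for term in terms[1:]:
        hits &= index.get(term, set())
    return hits

print(search("web index"))  # documents mentioning both "web" and "index"
```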
Indirection is the ability to reference something using a name, reference, or container instead of the value itself. The most common form of indirection is the act of manipulating a value through its memory address, for example, accessing a variable through the use of a pointer. A stored pointer that exists to provide a reference to an object by double indirection is called an indirection node. In some older computer architectures, indirect words supported a variety of more-or-less complicated addressing modes.
Probabilistic Relevance Model
is a formalism of information retrieval
useful to derive ranking functions used by search engines and web search
engines in order to rank matching documents according to their relevance
to a given search query. It makes an estimation of the probability of
finding if a document dj is relevant to a query q. This model assumes that
this probability of relevance depends on the query and document
representations. Furthermore, it assumes that there is a portion of all
documents that is preferred by the user as the answer set for query q.
Such an ideal answer set is called R and should maximize the overall
probability of relevance to that user. The prediction is that documents in
this set R are relevant to the query, while documents not present in the
set are non-relevant.
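The best-known ranking function derived from this probabilistic framework is BM25; the sketch below is a simplified Python version over a toy collection (the documents are invented and k1/b are the usual textbook defaults), shown only to illustrate how documents get scored and ranked against a query.

```python
import math
from collections import Counter

docs = [
    "probabilistic models rank documents by relevance",
    "a relevance model estimates the probability that a document answers a query",
    "web directories organize links by category",
]
k1, b = 1.5, 0.75                      # common BM25 parameter values
tokenized = [d.lower().split() for d in docs]
avgdl = sum(len(d) for d in tokenized) / len(tokenized)
N = len(tokenized)

def idf(term):
    """Inverse document frequency: rarer terms carry more evidence of relevance."""
    n = sum(1 for d in tokenized if term in d)
    return math.log((N - n + 0.5) / (n + 0.5) + 1)

def bm25(query, doc):
    """Score one document against the query under the probabilistic model."""
    freqs = Counter(doc)
    score = 0.0
    for term in query.lower().split():
        f = freqs[term]
        score += idf(term) * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc) / avgdl))
    return score

query = "relevance probability"
ranked = sorted(range(N), key=lambda i: bm25(query, tokenized[i]), reverse=True)
print(ranked)  # document indices, most relevant first
```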
A Bayesian network is a probabilistic graphical model (a type of statistical model) that represents a set of random variables and their conditional dependencies via a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.
Bayesian inference is a method of statistical inference in which Bayes' theorem is used to update the probability for a hypothesis as more evidence or information becomes available. Bayesian inference is an important technique in statistics, and especially in mathematical statistics. Bayesian updating is particularly important in the dynamic analysis of a sequence of data. Bayesian inference has found application in a wide range of activities, including science, engineering, philosophy, medicine, sport, and law. In the philosophy of decision theory, Bayesian inference is closely related to subjective probability, often called "Bayesian probability".
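A tiny worked example of Bayesian updating in Python, using the disease-and-symptom setting mentioned above (all probabilities are invented for illustration): a prior belief in the disease is updated by Bayes' theorem once the symptom is observed.

```python
# Invented numbers, purely to illustrate Bayes' theorem:
p_disease = 0.01                 # prior: P(disease)
p_symptom_given_disease = 0.90   # likelihood: P(symptom | disease)
p_symptom_given_healthy = 0.05   # P(symptom | no disease)

# Total probability of observing the symptom at all (the evidence).
p_symptom = (p_symptom_given_disease * p_disease
             + p_symptom_given_healthy * (1 - p_disease))

# Bayes' theorem: posterior = likelihood * prior / evidence.
p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom
print(round(p_disease_given_symptom, 3))  # ~0.154: the evidence raised the belief
```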
A search aggregator is a type of metasearch engine which gathers results from multiple search engines simultaneously, typically through RSS search results. It combines user-specified search feeds (parameterized RSS feeds which return search results) to give the user the same level of control over content as a general aggregator. A metasearch engine (or search aggregator) is a search tool that uses another search engine's data to produce its own results from the Internet. Metasearch engines take input from a user and simultaneously send out queries to third-party search engines for results. Sufficient data is gathered, formatted by their ranks, and presented to the users.
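As a rough sketch of the metasearch idea in Python (the two "engines" are hard-coded stand-ins, not real services): the same query goes out to several sources and the returned rankings are merged before being shown to the user.

```python
def engine_a(query):
    # Stand-in for a third-party search engine: returns URLs, best result first.
    return ["https://a.example/1", "https://shared.example/x", "https://a.example/2"]

def engine_b(query):
    return ["https://shared.example/x", "https://b.example/1"]

def metasearch(query, engines):
    """Query every engine, then merge the results by summed reciprocal rank."""
    scores = {}
    for engine in engines:
        for rank, url in enumerate(engine(query), start=1):
            scores[url] = scores.get(url, 0) + 1 / rank
    return sorted(scores, key=scores.get, reverse=True)

print(metasearch("human search engine", [engine_a, engine_b]))
```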
Prospective search is a method of searching on the Internet where the query is given first and the information for the results is then acquired. This differs from traditional, or "retrospective", search, such as search engines, where the information for the results is acquired and then queried.
Subject indexing is the act of describing or classifying a document by index terms or other symbols in order to indicate what the document is about, to summarize its content or to increase its findability. In other words, it is about identifying and describing the subject of documents. Indexes are constructed, separately, on three distinct levels: terms in a document such as a book; objects in a collection such as a library; and documents (such as books and articles) within a field of knowledge.
Search Engine Indexing
collects, parses, and stores data to facilitate
fast and accurate information retrieval. Index design incorporates
interdisciplinary concepts from linguistics, cognitive psychology,
mathematics, informatics, and computer science. An alternate name for the
process in the context of search engines designed to find web pages on the
Internet is web indexing.
Text mining, also referred to as text data mining, roughly equivalent to text analytics, refers to the process of deriving high-quality information from text. High-quality information is typically derived through the devising of patterns and trends through means such as statistical pattern learning. Text mining usually involves the process of structuring the input text (usually parsing, along with the addition of some derived linguistic features and the removal of others, and subsequent insertion into a database), deriving patterns within the structured data, and finally evaluation and interpretation of
the output. 'High quality' in text mining usually refers to some
combination of relevance, novelty, and interestingness. Typical text
mining tasks include text categorization, text clustering, concept/entity
extraction, production of granular taxonomies, sentiment analysis,
document summarization, and entity relation modeling (i.e., learning
relations between named entities). Text analysis involves information
retrieval, lexical analysis to study word frequency distributions, pattern
recognition, tagging/annotation, information extraction, data mining
techniques including link and association analysis, visualization, and
predictive analytics. The overarching goal is, essentially, to turn text
into data for analysis, via application of natural language processing
(NLP) and analytical methods. A typical application is to scan a set of
documents written in a natural language and either model the document set
for predictive classification purposes or populate a database or search
index with the information extracted.
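Here is a minimal Python sketch of one typical text-mining task named above, text categorization (a tiny invented training set and simple word-overlap scoring rather than real statistical pattern learning): unstructured text is reduced to word counts and assigned to the closest category.

```python
from collections import Counter
import re

# Tiny labeled corpus (invented) used to build one word profile per category.
training = {
    "science": ["the telescope observed a distant galaxy",
                "researchers published the experiment data"],
    "sports":  ["the team won the final match",
                "the player scored in the last minute"],
}

def words(text):
    return re.findall(r"[a-z]+", text.lower())

profiles = {label: Counter(w for doc in docs for w in words(doc))
            for label, docs in training.items()}

def categorize(text):
    """Assign the text to the category whose word profile it overlaps most."""
    counts = Counter(words(text))
    scores = {label: sum(min(counts[w], profile[w]) for w in counts)
              for label, profile in profiles.items()}
    return max(scores, key=scores.get)

print(categorize("new data from the telescope"))  # -> "science"
```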
Social search is a behavior of retrieving and searching on a social searching engine that mainly searches user-generated content such as news, videos and images related to search queries on social media like Facebook, Twitter, Instagram and Flickr. It is an enhanced version of web search that combines traditional algorithms. The idea behind social search is that instead of a machine deciding which pages should be returned for a specific query based upon an impersonal algorithm, results that are based on the human network of the searcher might be more relevant to that specific user's needs.
Interactive Person to Person Search Engine
Gimmeyit (search engine)
is a crowd-source-based search engine using social media content to find relevant search results rather than the traditional rank-based search engines that rely on routine cataloging and indexing of website data. The crowd-source approach scans social media sources in real-time to find results based on current social "buzz" rather than proprietary ranking algorithms being run against indexed sites. With a crowd-source approach, no websites are indexed and no storage of website metadata is maintained.
- Public Data
Selection-based search is a search engine system in which the user invokes a search query using only the mouse. A selection-based search system allows the user to search the internet for more information about any keyword or phrase contained within a document or webpage in any software application on his desktop computer using the mouse.
Web Searching Tips
A web portal is most often a specially designed web site that brings information together from diverse sources in a uniform way. Usually, each information source gets its dedicated area on the page for displaying information (a portlet); often, the user can configure which ones to display.
A router is a networking device that forwards data packets between computer networks. Routers perform the traffic directing functions on the Internet. A data packet is typically forwarded from one router to another through the networks that constitute the internetwork until it reaches its destination node.
- Web of Life
Window to the World
A Human Search Engine also includes..
Human-based computation is a computer science technique in which a machine performs its function by outsourcing certain steps to humans, usually as microwork. This approach uses differences in abilities and alternative costs between humans and computer agents to achieve symbiotic human-computer interaction. In traditional computation, a human employs a computer to solve a problem; a human provides a formalized problem description and an algorithm to a computer, and receives a solution to interpret. Human-based computation frequently reverses the roles; the computer asks a person or a large group of people to solve a problem, then collects, interprets, and integrates their solutions.
Reflective Practice
Open to the Public
Open content is knowledge released in such a way that users are free to read, listen to, watch, or otherwise experience it; to learn from or with it; to copy, adapt and use it for any purpose; and to share the work (unchanged or modified).
A knowledge commons refers to information, data, and content that is collectively owned and managed by a community of users, particularly over the Internet. What distinguishes a knowledge commons from a commons of shared physical resources is that digital resources are non-subtractible; that is, multiple users can access the same digital resources with no effect on their quantity or quality.
Open knowledge is knowledge that one is free to use, reuse, and redistribute without legal, social or technological restriction. Open knowledge is a set of principles and methodologies related to the production and distribution of knowledge works in an open manner. Knowledge is interpreted broadly to include data, content and general information.
Open Knowledge Initiative
is an organization responsible for the
specification of software interfaces comprising a Service Oriented
Architecture (SOA) based on high level service definitions.
Open Access Publishing
refers to online research outputs that are free
of all restrictions on access (e.g. access tolls) and free of many
restrictions on use (e.g. certain copyright and license restrictions)
Open data is the idea that some data should be freely available to everyone to use and republish as they wish, without restrictions from copyright, patents or other mechanisms of control. An open license describes a creative work that others can copy or modify freely.
A Human Search Engine is a lot of work
I have been working an average of
20 Hours a week since 1998 and over 50 Hours a week since 2006.
With over 450 billion web pages on the World Wide Web, there's a lot of information to be organized. And with almost 2 billion people on the internet, there are a lot of minds to collaborate with.
My Human Search Engine skills are always improving, but I'm definitely not perfect, so there is always more to learn. I'm constantly multitasking, so I do make mistakes from time to time, especially with proofreading my own writing, which seems almost impossible. This is why writers and authors have proofreaders and copy editors, which is something I cannot afford right now, so please excuse me for my spelling errors and poor grammar. Besides that, I'm still making progress and I'm always acquiring new knowledge, which always makes these projects fascinating and never boring.
The Adventures in Learning
You can also look at my website as web indexing. Web indexing means creating indexes for individual Web sites, intranets, collections of HTML documents, or even collections of Web sites. Indexes are systematically arranged items, such as topics or names, that serve as entry points to go directly to desired information within a larger document or set of documents. Indexes are traditionally alphabetically arranged, but they may also make use of related-term cross-references, as provided by thesauri, or they may be entirely hierarchical, as in the case of taxonomies. An index might not even be displayed, if it is incorporated into a site search tool. Web indexing is an analytic process of determining which concepts are worth indexing, what entry labels to use, and how to arrange the entries. As such, Web indexing is best done by individuals skilled in the craft of indexing, either through formal training or through self-taught reading.
An index is a list of words or phrases ('headings') and associated pointers ('locators') to where useful material relating to that heading can be found in a document or collection of documents. Examples are an index in the back matter of a book and an index that serves as a library catalog.
A Web index
is often a browsable list of
entries from which the user makes selections, but it may be
non-displayed and searched by the user typing into a search box.
A site A-Z index is a kind of Web index that resembles an
alphabetical back-of-the-book style index, where the index
entries are hyperlinked directly to the appropriate Web page or
page section, rather than using page numbers.
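As an illustration, here is a small Python sketch of how a site A-Z index like the one just described could be generated (the page titles and URLs are made up): entries are sorted alphabetically, grouped by first letter, and written out as hyperlinks to the appropriate pages.

```python
from itertools import groupby

# Hypothetical page titles mapped to their URLs on the site.
pages = {
    "Astronomy": "/astronomy.html",
    "Bayesian Inference": "/bayes.html",
    "Archives": "/archives.html",
    "Web Indexing": "/web-indexing.html",
}

entries = sorted(pages.items(), key=lambda item: item[0].lower())
for letter, group in groupby(entries, key=lambda item: item[0][0].upper()):
    print(letter)
    for title, url in group:
        # Each entry links straight to the page, like a back-of-the-book index.
        print(f'  <a href="{url}">{title}</a>')
```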
Interwiki linking is a facility for creating links to the many wikis on the World Wide Web. Users avoid pasting in entire URLs (as they would for regular web pages) and instead use a shorthand similar to links within the same wiki (intrawiki links).
I'm like an isle in the internet, organizing data out of necessity while making it a value to others at the same time, eventually connecting to other human search engines around the world to expand its reach and effectiveness.
I like to describe my website as being kind of like a lateral blog rather than the usual blog, because I update multiple pages at once instead of just one. As of 2010, around 120,000 new weblogs are being created worldwide each day, but of the 70 million weblogs that have been created, only around 15.5 million are actually active. Though blogs and User-Generated Content are useful to some extent, I feel that too much time and effort is wasted, especially if the information and knowledge that is gained from a blog is not organized and categorized in a way that readers can utilize and access these archives like they would do with newspapers. This way someone can build knowledge-based evidence and facts to use against corruption and incompetence. This would probably take a central database for all the blogs to submit to. This way useful knowledge and information is not lost in a sea of confusion. This is one of the reasons why this website's information and links will continue to be organized and updated so that the website continues to improve.
A Link in a Chain
"There's a lot you don't know, welcome to web 3.0" This
is not just my version of the internet, this is my vision of the
internet. And this is not philosophy, it's just the best idea that I have so far until I can
find something better to add to it, or replace it, or change it. A Think Tank
who's only major influence is
"When an old man dies, it's like entire library burning down to the ground. But not for
me, I'll just back it up on the internet."
Internet Searching Tips
"Knowing how to ask a question and knowing how to analyze the answers"
If you're on a website and you're using the Firefox browser, right click on the page, then click on "Save Page As", and it will save the entire page on your computer so that you can view that page when you are offline, without the need of an internet connection.
When searching the Internet you have to use more than one search engine to do a complete search. Using one search engine will narrow your findings and possibly keep you from finding what you're looking for, because most search engines are not perfect and are sometimes unorganized, flawed and manipulated. This is why I'm organizing the Internet: because search engines are flawed and thus cannot be fully depended on for accuracy.
Using the same exact keywords on 4 different search engines, I found the website that I was looking for at the top, in the number one position, on 2 of the 4 search engines, and I could not find that same website on the other search engines unless I searched several pages deep. So one search engine is flawed or manipulated and the other search engine is not. There is a chance that the webpage you are looking for is not titled correctly, so you may have to use different keywords or phrases in order to find it. But even then this is no guarantee, because search engines also use other factors when calculating the results for particular words or phrases. And what all those other factors are and how they work is not fully known.
Search engines are in fact a highly important public service, just like a public utility, except not corrupted of course. If you honestly cannot say exactly how and why you performed a particular action, then how the hell are people supposed to believe you, or understand what they need to do in order to fix your mistake, or at least confirm there was no mistake? Transparency, and knowing the terms of service for these particular services, are absolutely necessary. People have the right not to be part of an experiment. These systems need to be open and accountable in order for us to work accurately and efficiently.
Some search engines manipulate their results, while at the same time they kill small businesses, and not only that, they influence other people to censor information and corrupt the system. Why do corporations get greedy and criminal? And why do they cause others to repeat this behavior? Power is a cancer in the wrong hands.
Problems with Google
Google Suggest, Google's instant autocomplete, automatically fills in words and phrases with search predictions and suggestions, sometimes with disturbing results. Autocomplete works OK most of the time, but it is also used to manipulate at its worst.
Search Engine Failures
Human Search Engine
"If you are indexing information, that should be your focus.
If information is judged on irrelevant factors, then you will
fail to correctly distribute information, which will make
certain information in search results unreliable, illogical and
In the mean time
when searching the Internet, going
several pages deep on search engines will also help find
information because the first 10 choices are sometimes
irrelevant. I have sometimes found things that I'm looking for
30 pages deep. You will also find different key words, phrases
and characters within the search results that may also help
increase your odds of finding what you're looking for. Sometimes
checking a websites links on their resources page may also help
you find websites that are not listed correctly in search
engines. Web Searching for Information needs to be a
Human Search Engine Tips
Most search engines like Google have
Advanced Searching Tools
found on the side or at the bottom
of their search pages.
Knowing where to
type in certain characters in your search phrases also helps you
find what you're looking for.
If you want to limit your searches on Google to only education websites or government websites, then type in "site:edu" or "site:gov" after your keyword or phrase. For example: Teaching Mathematical Concepts site:edu. For searching a specific website, type the site after the word or search phrase, for example: neutrino site:harvard.edu. To narrow your searches to file types like PowerPoint, Excel or PDFs, type in filetype:ppt after the word. For search ranges, use 2 periods between 2 numbers, like "Wii" followed by a price range (a small sketch that composes these operators appears after these tips).
Using quotes or a + within your search phrases also helps. For example, imagine you want to find pages that have references to both President Obama and President Bush on the same page; you could search one way for that, or, if you want to find pages that mention just President Obama and not President Bush, then your search would be different. If you are looking for sand sharks, search engines will give you results with the word sand and the word sharks, but if you use quotation marks around "sand sharks" it will help narrow your search.
Using "~" (tilde)
before a search term yields results with related terms.
typing "50 miles in kilometers" or 100 dollars in Canadian
Use Google to do
math just enter a calculation as you would into your computer's
(i.e. * corresponds to multiply, / to divide, etc)
To find a time in a certain place type in
Time: Danbury, Ct
Just got a phone call and want to see where the call is from? Type in the 3-digit area code.
Type any address into Google's main search bar for maps and
select the day of the week and the time of day
for the traffic forecast.
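The operators from the tips above can also be composed programmatically; this small Python sketch (illustrative only, not any official API) just builds the query strings you would paste into a search box.

```python
def build_query(phrase, site=None, filetype=None, exact=False, number_range=None):
    """Compose a search query string using the operators described in the tips above."""
    query = f'"{phrase}"' if exact else phrase
    if site:
        query += f" site:{site}"          # e.g. site:edu or site:harvard.edu
    if filetype:
        query += f" filetype:{filetype}"  # e.g. filetype:ppt or filetype:pdf
    if number_range:
        low, high = number_range
        query += f" {low}..{high}"        # two periods between two numbers
    return query

print(build_query("teaching mathematical concepts", site="edu"))
print(build_query("neutrino", site="harvard.edu", filetype="pdf"))
print(build_query("Wii", number_range=(100, 200)))  # made-up range for illustration
```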
What are people searching for, and what keywords are they using?
Search Query Trends
Google Insights Search Trends
Yahoo Alexa Web
You can learn even more great search tips by visiting the website Search Engine Watch, which can also help with improving your Internet searching.
More Amazing Numbers and Facts