[Fwd: Re: [school-discuss] most frequently used words]
owner-schoolforge-discuss@schoolforge.net wrote:
> From: Michael Hall <olc@openlearningcommunity.org>
> To: "schoolforge-discuss@schoolforge.net"
> Subject: Re: [school-discuss] most frequently used words
>
> There are several lists of the 100 most used words in English kicking
> around schools. As a language/literacy specialist, I find them of very
> limited use in actually teaching people (kids or adults) to read, and
> software based on such lists is IMHO similarly limited.
>
> The problem is that literacy/reading involves far more than simply
> 'knowing words', or being able to recognise random, decontextualised
> words. This is not to say that word drills don't have a place in literacy
> education, as long as we don't lose sight of the fact that it is a
> relatively small place in the overall picture.
>
> It is also quite difficult to create any useful or meaningful
> extended language out of the top 100 English words ...
> 'grammatical' words such as articles,
> prepositions and pronouns are very common, while 'content' words such
> as nouns and verbs are relatively scarce. Authentic language uses networks
> of content words related to the field or topic of a text, not random
> lists. And without an authentic context involving 'non-top-100' words, the
> several nuances of meaning that are attached to many 'grammatical' words
> cannot be made explicit.
>
> So, I'm personally not a fan of top-100-words approaches to literacy.
> However,
> extensive work has been undertaken as part of the UK-based Cobuild project
> into the statistical analysis of word frequency in English. This project
> involved heavy computer usage to 'crunch' millions of words and produce
> very sophisticated analyses not only of word frequency but also of which
> words are statistically most likely to occur with and around other words.
> A Cobuild dictionary has been produced (not sure how useful it would be
> in this case) and a search on Google would probably yield something.
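As a toy illustration of the kind of co-occurrence counting described above (a minimal sketch, nowhere near Cobuild's corpus-scale methodology; the sample text and function name are invented for the example):

```python
from collections import Counter
import re

def collocations(text, target):
    """Count which words appear immediately before or after `target`."""
    words = re.findall(r"[a-z']+", text.lower())
    neighbours = Counter()
    for i, w in enumerate(words):
        if w == target:
            if i > 0:
                neighbours[words[i - 1]] += 1   # word to the left
            if i + 1 < len(words):
                neighbours[words[i + 1]] += 1   # word to the right
    return neighbours

sample = "the cat sat on the mat and the cat saw the dog"
print(collocations(sample, "cat").most_common(3))
```

A real collocation study would of course use window sizes larger than one word and significance statistics, not raw counts.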
>
> Anyway, that's my 2 cents.
>
> Michael Hall
>
> On 29 Jan 2002, Dominique Broeglin wrote:
>
> > Hello,
> >
> > Try looking at WordNet: http://www.cogsci.princeton.edu/~wn/online/
> > You may find many tools like that in the field of knowledge
> > management, and even more in information retrieval, because a lot of
> > research is currently being done on semantic analysis and retrieval.
> >
> > Cheers,
> > Dom
> > On Tue, 2002-01-29 at 00:09, Jeremy C. Reed wrote:
> > > I am looking for some easy ways to figure out the most commonly used
> > > words (in English).
> > >
> > > But, I would like to categorize them by nouns, verbs, articles, pronouns,
> > > conjunctions, etc.
> > >
> > > Does anyone know of any dictionary software, usable from a Unix
> > > command line, that can help?
> > >
> > > Such as some tool like:
> > >
> > > $ the-dictionary -t frog
> > > noun
> > > $ the-dictionary -t ahdsjkhgfe
> > > [not in dictionary]
> > > $
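A minimal sketch of such a lookup tool in Python, assuming a small hand-built part-of-speech table (the table and script name are hypothetical; a real version might instead query WordNet's index files or another machine-readable dictionary):

```python
import sys

# Tiny illustrative POS table; a real tool would load a full dictionary.
POS_TABLE = {
    "frog": "noun",
    "jump": "verb",
    "green": "adjective",
    "the": "article",
    "she": "pronoun",
    "and": "conjunction",
}

def lookup(word):
    """Return the part of speech for `word`, or a not-found marker."""
    return POS_TABLE.get(word.lower(), "[not in dictionary]")

if __name__ == "__main__":
    for word in sys.argv[1:]:
        print(lookup(word))
```

Run as `python the-dictionary.py frog`, it prints `noun`, mirroring the interface sketched above.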
> > >
> > > (I can already build a list of frequently used words from miscellaneous
> > > emails, and HTML and txt docs on my system.)
> > >
> > > My plan is to build categorized lists of top words for reading practice.
> > >
> > > Jeremy C. Reed
> > > http://www.reedmedia.net/
> > >
> > > p.s. for example, frequently used words (not categorized):
> > >
> > > 7.6% the
> > > 3.0% to
> > > 2.6% a
> > > 2.5% of
> > > 2.3% and
> > > 2.0% is
> > > 1.7% in
> > > 1.5% for
> > > 1.0% this
> > > 1.0% that
> > > 1.0% be
> > > 0.8% with
> > > 0.8% if
> > > 0.7% or
> > > 0.7% it
> > > 0.7% are
> > > 0.6% you
> > > 0.6% on
> > > 0.6% not
> > > 0.6% by
> > > 0.6% as
> > > 0.5% from
> > > 0.5% an
> > > 0.4% will
> > > 0.4% which
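The kind of tally shown above can be sketched in Python like this (an assumed reimplementation for illustration, not the poster's actual script; a real run would read the mail/HTML/txt files rather than the inline sample):

```python
from collections import Counter
import re

def word_frequencies(text, top=10):
    """Tally words and report each as a percentage of the total count."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words)
    return [(word, 100.0 * count / total)
            for word, count in Counter(words).most_common(top)]

# In practice `text` would be the concatenated documents on disk.
text = "the cat and the dog and the bird"
for word, pct in word_frequencies(text):
    print(f"{pct:4.1f}% {word}")
```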
> > >
> >
> >
>
> --
> -------------------------------------------------
> o p e n l e a r n i n g c o m m u n i t y . o r g
> MJ & MG Hall - admin@openlearningcommunity.org
> PO Box 8133 Alice Springs NT - ph/fax 08 89531442
> -------------------------------------------------
--
Doug Loss All I want is a warm bed
Data Network Coordinator and a kind word and
Bloomsburg University unlimited power.
dloss@bloomu.edu Ashleigh Brilliant