Monday, September 26, 2005

Foucault

They say a good place to start is the beginning. I am going to begin at a bad place because I am not quite sure I understood the beginning of Foucault's 'What Is an Author?' (The reasons why he found this an important question to answer, as well as the background for when and how the concept of 'the death of the author' first came up, are sketchily dealt with in the article. I think these should be things we discuss in class this week.)

The concept of an 'author function' as defined in the essay seemed to me its most relevant part in the context of the classes we have had so far. Foucault makes the point that the author function does not refer only, or completely, to 'the person who wrote that book.' Of the four characteristic traits of the author function that he lays out, the first three, at least, tie in strongly to our discussions in class so far, albeit in a slightly altered context.

The first trait he describes originates from the observation that discourses are 'objects of appropriation' and that, in fact, the author function is connected to, and shaped by, several institutions. The form of ownership of a discourse is still largely shaped by institutions, despite claims to the contrary, and this is something we have been discussing in class as well, though more in the context of the web.

The second trait is that the effect of the author function on a discourse depends on the times, circumstances and culture it is introduced into. The example of how different genres required an author's name in different periods, I thought, tied in to the discussions we had on when we require the stamp of an institution (including journals, newspapers, banks) to take work seriously, and on what circumstances prompted us to feel that way.

The third trait described by the article is that author functions are constructed through a series of operations. Assignment 2 comes to mind! Foucault talks of the four criteria used historically to make sure that a 'work' is attributed to the correct individual. Using something similar, we relied on a whole range of cues to decide if we liked/believed a source.

Overall, I think, the article tries to point out that an author, his/her work and the context within which s/he operates are all intimately connected. It is important to make these connections to understand that we all carry a conception of an author with us when we read a work. It is unlikely that this will change completely in the future, or that there will be a time when a work carries meaning/authenticity by itself, with no links to the individuals/institutions we call its author function. This is what I conclude. Unfortunately, I did not understand the end of the article any more than I did the beginning, so I don't know if I am in agreement with Foucault. What do you think?

Wednesday, September 21, 2005

.xxx

The discussion of the .xxx domains and the 'responsible' pornographers seems to me the precursor to something like the Communications Decency Act being attempted again. So, follow this chain: we define that area over there to be for porn, and we handpick the people who get to go in there. Kid-filtering software, etc., blocks .xxx. Then, definitionally, anyone who is providing porn but not on .xxx is 'irresponsible', and an easily targetable group. So, ban them: pornographers who aren't within the official hierarchy are trying to subvert filtering software and therefore are peddling porn to children. Then, impose more 'responsibility' restrictions on what is allowed in/to stay in .xxx, and you can effectively control 'porn' online.
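Just to make the mechanics concrete, here's a minimal sketch (entirely my own and hypothetical, not any real filtering product) of what TLD-based blocking amounts to: the filter only looks at the domain, so anything on .xxx is blocked wholesale and the identical content hosted anywhere else sails right through - which is exactly why the off-.xxx providers become the next target.

```python
# Hypothetical sketch of TLD-based "kid filtering" - it sees only the domain, never the content.
from urllib.parse import urlparse

def blocked_by_tld_filter(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return host.endswith(".xxx")

print(blocked_by_tld_filter("http://example.xxx/gallery"))       # True: on the "approved" domain, blocked
print(blocked_by_tld_filter("http://example.com/same-gallery"))  # False: same content elsewhere, missed
```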

Why that's bad... 'I know it when I see it', as was pointed out, depends a lot on who's doing the looking. At this point (at the end of the chain described), we have a clear delineation between 'approved porn' and 'illegal porn', but the decisions about what gets allowed into the 'approved' category are difficult. Border cases (nude art, let's say to start with; further along, we'll end up with sex-ed pages, etc.) either have to classify themselves as 'pornography' and be roped into a corner and regulated, or run the risk of being nailed for being 'illegal'. Soon (in my paranoid vision), anything remotely edgy is either regulated porn or illegal.

OK, a little over the top, and poorly thought through... Just some ideas.

Tuesday, September 20, 2005

New "Lemons" ?

Let me start out by saying that I found the readings for this week to be quite insightful. However, I had a few concerns with regard to the "Market for Lemons" article. Near the outset of the paper, the author distinguishes the "used" and "new" car markets, yet seems to treat both as equal in the subsequent analysis. The statement that struck me the most was the one referring to "lemons" in both the new and used car markets. Personally, I have never heard anyone speak of "lemons" in the new car market. Isn't that the whole point of buying a new car? When buying a new car, the consumer is guaranteed a fully functioning vehicle (i.e., a non-lemon, or call it any other fruit/vegetable of your liking). If, God forbid, the car does show any severe problems, you can take it right back to the dealer and get it fixed or replaced under warranty. If the car's performance starts to deteriorate in 5, 6 or 10 years, that's just normal wear and tear and, therefore, you can't call it a lemon. A Mercedes might last longer than a Daihatsu under similar conditions. That's a fact of life: you get what you paid for (usually).

So what's the point of all this semi-ranting? My point is, I believe "lemon" is a term that should only be used when referring to bad-quality cars in the used market. And that's where the whole "lemon" problem arises: as the author states, you cannot tell whether the car you are about to purchase is a lemon or not. In addition, there is usually no guarantee in the used market that your car will be functioning properly a few months (days?) after purchase, which exacerbates the issue.

Another point I wanted to make is that consumers today are much more aware of the market thanks to new technologies (such as the Internet), which make it very easy for them to research the car they want to buy (and the company or used-car dealer). We have to keep in mind that the paper was written in the 1970s, long before the commercialization and widespread availability of the Internet. Also, consumers tend to be more demanding these days. There is a growing market for "certified" used cars, which come with a quality guarantee much like that on a brand new car. Of course, this comes at a higher price, but it is definitely one way to avoid being fooled into buying a "lemon". By the way, does anyone know why they call it a "lemon"? Just curious.

Friday, September 16, 2005

Google on Googlebombing

So, this popped up on the Google blog the other day - very similar to what we were discussing in class on Weds. Interesting to hear it from them...

Thursday, September 15, 2005

Next week

I sent a note about next week's reading and class to the quality@sims list. If you have not subscribed, please do so.

Wednesday, September 14, 2005

Economics, Information, and Quality Readings

I must admit that my knowledge of economics is poor; I therefore had a hard time reading the formulas in the “Market for Lemons” piece. With that aside, I did glean some useful bits of information about how the quality of goods affects a market, particularly from page 495 to the end of the chapter. Akerlof’s primary theme is that “dishonest dealings tend to drive honest dealings out of the market.” The problem stems from a frequent inability to distinguish good wares from bad. As a consequence, reputable sellers offer guarantees and warranties on their products to counteract this effect. Additionally, people who can easily distinguish between good and poor quality items within a given market have a great opportunity for success. Akerlof describes how a financial lender armed with “intimate knowledge of those around him…is able, without serious risk, to finance those who would otherwise get no loan at all.”
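As an aside, here's a toy simulation of the unraveling Akerlof describes (my own sketch with invented numbers, not his actual model): sellers know each car's quality, buyers can only see the average quality of what's still on offer, so the owners of the better cars keep pulling out and the going price keeps sliding.

```python
# Toy adverse-selection loop - assumptions: quality is uniform on [0, 1], a seller
# values a car at its quality q, a buyer values it at 1.5*q but can only observe
# the average quality of the cars still offered, so that's all the buyer will pay.
import random

random.seed(0)
cars = [random.random() for _ in range(10_000)]   # each car's privately known quality

for round_no in range(8):
    avg_quality = sum(cars) / len(cars)
    price = 1.5 * avg_quality                     # buyers bid the value of an "average" remaining car
    cars = [q for q in cars if q <= price]        # owners whose car is worth more than that withdraw it
    print(f"round {round_no}: price {price:.3f}, cars still for sale {len(cars)}")
```

The price collapses round after round, which is the “dishonest dealings drive out honest dealings” dynamic in miniature - and a warranty, by letting the owner of a good car credibly stand behind it, is what breaks the loop.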

These same principles can be applied to the Internet today. We know that there is an abundance of “rotten” information available on the Internet. Will this contamination drive out legitimate websites in time? Who hasn’t been frustrated when their inbox becomes clogged with Viagra spam and it becomes a nuisance just to find the one legitimate email from Mom? Or, worse, when Mom’s email gets put in the junk folder because it happens to contain the word “free.” Clearly, there is a huge opportunity for companies that can distinguish good-quality information from bad – just as the lender did in Akerlof’s piece. These are the spam-filter companies, the children’s Internet filter companies and the pop-up blockers of the world. But I still believe that we need more accountability on the side of the web publisher. Just as Dell provides a guarantee on its computers and the University of Phoenix is accredited by The Higher Learning Commission, content published on the web should be certified in some way by a third-party organization. Search engines could then add certifications to their ranking algorithms and provide users with better quality information.
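Here's a hypothetical sketch of what that might look like - the certification flag, the 0.2 boost and the URLs are all invented for illustration, and no real search engine ranks this simply.

```python
# Hypothetical: fold a third-party certification signal into a ranking score.
def ranking_score(relevance: float, certified: bool, cert_weight: float = 0.2) -> float:
    """Combine a base relevance score (assumed 0..1) with a certification boost."""
    return relevance + (cert_weight if certified else 0.0)

results = [
    {"url": "http://uncertified-example.com", "relevance": 0.80, "certified": False},
    {"url": "http://certified-example.org",   "relevance": 0.70, "certified": True},
]
results.sort(key=lambda r: ranking_score(r["relevance"], r["certified"]), reverse=True)
print([r["url"] for r in results])   # the certified page edges out the slightly more "relevant" one
```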

In the second reading, Boyle discusses the impact that price searching has on a market; specifically, that it tends to reduce the disparity of prices. What instantly came to mind in reading this chapter was Froogle. Instead of having to laboriously call store after store or scour the Sunday paper's ad section for a low price, buyers can now go online and quickly search for the lowest price on a given product. One would think that prices would become equal as a result of this power, but in fact disparities remain because not every buyer is searching and not all sellers are advertising at any given moment. In addition, seller reputation, product warranty, product quality and e-commerce usability often weigh in over price when prices are relatively close. There’s nothing like an unsecured payment information page to scare buyers away.
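A quick toy simulation of why the disparities survive (my own sketch, not Boyle's model - the 30% searcher share and the price range are invented): as long as only some buyers comparison-shop, everyone else keeps paying whatever the seller they happen to land on is charging.

```python
# Toy price-dispersion sketch: a fixed set of posted prices, some buyers search, most don't.
import random, statistics

random.seed(1)
posted = [round(random.uniform(90, 110), 2) for _ in range(20)]   # 20 sellers' posted prices
searcher_share = 0.3                                              # assumed fraction who comparison-shop

paid = []
for _ in range(5_000):                                            # 5,000 buyers
    if random.random() < searcher_share:
        paid.append(min(posted))               # searchers always find the lowest posted price
    else:
        paid.append(random.choice(posted))     # non-searchers buy wherever they stumble

print(f"lowest posted price:   {min(posted):.2f}")
print(f"average price paid:    {statistics.mean(paid):.2f}")
print(f"spread in prices paid: {statistics.stdev(paid):.2f}")
```

The posted prices here are fixed, so this only shows the buyer side - but the point stands: while a chunk of buyers don't search, plenty of sales still happen well above the minimum, and sellers feel little pressure to converge.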

It was interesting how Boyle relates the expense of searching (effort, time) to the results it yields. I have a friend who is constantly searching for the lowest price. He spends hours online looking for coupons, rebates, sales, etc. eBay, not surprisingly, is his favorite website. From my perspective, this kind of effort is rarely worth the $5 I’m going to save. But perhaps it’s because of people like him and sites like eBay that the disparities in price have become small enough to satisfy lazy people like me.

Monday, September 12, 2005

Rottenness

Two newspaper articles in today's papers point toward the subject of rottenness. One, from the NY Times (registration required), looks at post-hurricane Web sites, including one claiming that Katrina was a man-made storm designed to take control of the earth. And just so we don't think such claims are only a Web-based problem, the Guardian has an article on bad science and pseudoscience in newspapers.

Sunday, September 11, 2005

Plagiarism

The readings for this week touch on several provocative issues related to plagiarism. I will focus on the role of technology, and its effects on plagiarism within academia.

There is an underlying assumption within both readings that technology, and specifically the web, poses significant threats to the integrity of student research and coursework. The hypothesis is that the abundance of information and ease of access make plagiarism irresistible to more students. There seems to be a dearth of evidence presented in either article, however, to back up this conjecture. While the Final report of the UK Joint Information Systems Committee on Plagiarism presents some statistics on rising occurrences of intentional plagiarism, there is no definitive correlation to technology. The report admits that much of the evidence it presents stems from small-scale studies, or is circumstantial in nature.

Still, the concern that the changing nature of information availability may transform the values associated with student research seems valid. The report states that the majority of student plagiarism is inadvertent, due to misunderstanding of academic conventions and standards. The major problem seems to be a breakdown in communication, or disconnect in values, between students and the institutions. While the report suggests addressing this by making standards clear to students, the deeper problem of actually defining plagiarism is highly complex. It may be that the changing nature of information availability is one source of the disconnect between the long-standing academic values and the way students see information.

One possible solution suggested is for instructors to re-evaluate the structure of coursework, either in the way they design assignments, or by examining the students’ research process. Mallon decries this as "hand-holding," and worries that it will contribute to the further "infantilization of American college life" (247). However, by asking students to look critically at the process of creating research, through peer review groups for example, students may gain a much-needed skill in an age of information abundance. The types of assignments that seem to invite plagiarism are those that require the least amount of critical thinking: researching and reporting facts or the ideas of others. Assignments that require students to synthesize ideas and think originally, rather than focusing on canonical thought, would not only be less vulnerable to plagiarism, but would be more useful learning strategies in the "information age."


Mallon, Thomas. 2001. Afterword. In Stolen Words, 2nd ed. San Diego: Harcourt, pp. 239-253.

Deterring, Detecting, and Dealing with Student Plagiarism. Final report of the UK Joint Information Systems Committee on Plagiarism, Feb 2005.

Wednesday, September 07, 2005

quality mailing list

Please remember to sign up to the class mailing list. A message to majordomo at sims with "subscribe quality" in the body of the message--leave the title blank--will do it.

Monday, September 05, 2005

first week's readings


Varian, Hal & Peter Lyman. 2003. How Much Information? -- Executive Summary.

Nunberg, Geoffrey. 1996. "Farewell to the Information Age." In G. Nunberg (ed.), The Future of the Book. Berkeley, CA: University of California Press. [read to page 21]

Brown, John Seely & Paul Duguid. 2000. "Limits to Information." Chapter 1 in The Social Life of Information. Boston, MA: Harvard Business School Press, pp. 11-34.

So, here goes the first week's blogging for InfoQual... I think the way I'm going to approach this is to give some thoughts first on each paper alone, and then see if I can thread them together in any sort of coherent way. Forgive me for the rambling nature of all this, please... My head is a confused place, and spilling its contents without a particular goal often gets weird :P

How Much Information?
A few things bothered me about this paper. One major technical question was what got counted as information in the first place. Why, for example, is a telephone call information, while a discussion at the dinner table is not? Direct person-to-person communication is only counted as information in this paper if there is some sort of intermediating technology: the telephone, instant messenger, email, snail mail, etc. are all counted as information, not communication. They do clearly state their choices of media and channels, but the reasoning behind those choices is less clear.
Another troublesome question is what all this really means. They claim that over 3 years the amount of 'new stored information grew about 30% a year', but I really don't know that I understand what that means, particularly for me. A more valuable quantity to know, I would think, would be how the amount of information that the average individual deals with regularly has changed. Do I process 30% more information each year? Or do most normal individuals still deal with about as much information as before? Along with that, I guess, comes the question of the abnormal individual - who's using all the rest of the information? I don't feel like I consume so dramatically much more information now than I ever did before, so what's really going on?
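One bit of back-of-envelope arithmetic that helped me (my own, assuming the 30% compounds steadily over their three-year window): at that rate the yearly flow of new stored information more than doubles by the end of the period.

```python
# 30% annual growth compounded over three years
growth = 0.30
print((1 + growth) ** 3)   # ~2.197, i.e. roughly 2.2x as much new information stored per year
```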

Farewell to the Information Age
I must confess that I had a lot of trouble understanding much of this paper... I think I just don't have the linguistics or philosophy background to get some of it. What I did get of it, though, was interesting. A question occurred to me while reading about the distinction between 'intelligence' and 'information,' however. Nunberg asserts that it is a property of the modern conception of information that we, for example, accept something read in the sports section of the newspaper as 'true' without much question, but that such an acceptance would have been absurd for someone a few hundred years ago. It seems like the particular piece of information has a bit to do with this, though. He breaks down the degree of trust to give a newspaper from the 'things that are important to us' to the 'things that are important to them.' I would characterize this continuum differently, though - it seems to go from 'something factual there is no rational reason to lie about' to 'something that people care about enough to lie about'. The same continuum applies in other modes of communication as well. I might ask the guy next to me on the bus who won the game last night, and trust his answer without any knowledge of who he is, his motives, etc., because it's just a bit weird to think of someone lying about something like that. If I make a bet with someone about who will win a game, however, I might not trust them perfectly when they tell me I lost. I suppose the institutionalization of that makes a big difference, though... If I bet with the same person daily, I'd probably stop if they proved untrustworthy. Still, though, I think that there's an interesting idea in there - the amount of skepticism we have about a piece of information is probably very directly related to how much we care about that piece of information.

Limits to Information
Something I found really interesting in this chapter was the discussion of mass-customized goods - and the idea that it would still be the megacorp of your choice providing your individualized stuff. This tied in really neatly to the other two articles for me. In relation to the Varian article, this is a way of turning, for example, one pair of jeans into a million, without increasing the number of jeans that everyone has. It's an information explosion, basically, that has no corresponding overload. This also somewhat illustrates the 'all the news that fits' problem from the Nunberg article - of course I can't read every news article in the universe every day, but I can read my own personalized news feed. This might contain articles which are only interesting to me and a few other people, and would never be in a 'normal' newspaper, but for me are great. Both of these really brought to mind the 'long tail' concept, which, if anyone isn't familiar with it, I really recommend reading up on.

The Brown/Duguid article really brought out most clearly for me what I see as the theme of the readings, which I read as being a warning about the revolutionary fervor of many internet enthusiasts. The takeaway point of the readings that I saw was basically to view any claims about the internet (or really, anything) 'changing *everything* completely' with a great deal of skepticism, which is probably a good idea :) Much about the world is the same now as it was before the 'information age' began (well, I assume... for me it's been the 'information age' most of my life, so...). And I guess that's about all that I've got in me for the moment. Sorry for the length and confusion of this, but what do folks think, either about my points or about the articles in general?