I just accidentally a coca-cola bottle. Is this bad?
==
The answer is yes. Yes, humor lies in analogy. As for the background, trust me, you don't really want to see it.
Posted by Alexandre Linhares at 2:17 PM | 0 comments | Labels: analogy-making, cognitive science, science and ignorance, ultimatum bargaining
People in my research group are tired of hearing that meaning is constructed either out of experience or out of analogies. Over and over they have heard the line: "consider DNA: DNA is like a zipper, like computer code, etc." (I think I can safely assume readers of this blog know the drill.) Right after the DNA bit, I ask: now, what is this thing called a collateralized debt obligation that's bringing about the whole financial meltdown? And we get astonished faces, as nobody has any good analogy (or anchors in semantic space, to give it a rather technical name).
But now the fun has been spoiled. Check this out:
Crisis explainer: Uncorking CDOs from Marketplace on Vimeo.
Here are more: The credit crisis as Antarctic expedition, and untangling credit default swaps. These are very much worth your time, unless you happen to be the George Soros amongst our readers.
Finally, here's a link Anna's just pointed out: The Metaphor Observatory.
Posted by Alexandre Linhares at 9:10 PM | 0 comments | Labels: analogy-making, cognitive science, economic theory, editorial
With all the turmoil going on in the world of finance and economics, with the sh*t about to hit the fan, Lehman Brothers gone, and worst-case scenarios rapidly unfolding, Slashdot should take a serious look at this. Slashdot needs a finance/economics section, for at least two reasons: (i) whatever happens in finance and economics will soon be reflected in the tech/science scene; (ii) there are already loads of geeky finance types and quant economists there.
I, for one, .....ah, you get it.
Posted by Alexandre Linhares at 5:45 PM | 0 comments | Labels: editorial, news analysis, technology
Capyblanca is sorry to report that the physicists failed to end the world as promised. Luckily, the economists went ahead and delivered the apocalypse anyway.
Now that everything belongs to the state, all decisions are centralized, and we are all communists, fellow comrades, here's a third historical event, brought to you by those nasty capitalists: the first privately funded space launch.
Some of the most amazing 9 minutes of video ever! Except, of course, for those that include Scarlett Johansson.
I wish Carl Sagan were alive to see this. As one commenter said, despite it all, the future's looking better. Or at least it does from high up there.
So this is it for Homo sapiens sapiens. Apocalypse duly scheduled for Wednesday. I hope the Large Hadron Collider is on Twitter, so we can follow the end of the world with the obligatory final "WTF?" tweet.
Hey, it was fun while it lasted, wasn't it?
==
"Introducing a technology is not a neutral act--it is profoundly revolutionary. If you present a new technology to the world you are effectively legislating a change in the way we all live. You are changing society, not some vague democratic process. The individuals who are driven to use that technology by the disparities of wealth and power it creates do not have a real choice in the matter."
Posted by Alexandre Linhares at 7:21 PM | 1 comment | Labels: cognitive science, computer science, editorial, fluid concepts, history
By A. Linhares & Horacio Hideki Yanasse
Abstract. An implicit tenet of modern search heuristics is that there is a mutually exclusive balance between two desirable goals: search diversity (or distribution), i.e., searching through a maximum number of distinct areas, and search intensity, i.e., maximum search exploitation within each specific area. We claim that the hypothesis that these goals are mutually exclusive is false in parallel systems. We argue that it is possible to devise methods that exhibit both high search intensity and high search diversity throughout the entire algorithmic execution. We consider how distance metrics, i.e., functions for measuring diversity (given by the minimum number of local search steps between two solutions), and coordination policies, i.e., mechanisms for directing and redirecting search processes based on the information acquired through the distance metrics, can be combined into a framework for developing advanced collective search methods that achieve search intensity and search diversity simultaneously. The presented model also avoids the undesirable occurrence of a problem we refer to as the 'ergometric bike phenomenon'. Finally, this work is one of the very few analyses carried out at the level of meta-meta-heuristics, because all arguments are independent of the specific problem handled (scheduling, planning, etc.), of the specific solution methods (genetic algorithms, simulated annealing, tabu search, etc.), and of the specific neighborhood or genetic operators (2-opt, crossover, etc.).
Accepted for publication in Applied Intelligence.
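To make the framework's two ingredients concrete, here is a small, hypothetical sketch (the problem, operators, and thresholds are illustrative stand-ins, not the paper's actual method): several parallel hill-climbers provide search intensity, a Hamming distance serves as the diversity metric, and a coordination policy restarts whichever of two colliding climbers is worse, keeping the population spread out.

```java
import java.util.*;

// Hedged sketch: parallel hill-climbers (intensity) coordinated through
// a distance metric (diversity). OneMax stands in for a real problem.
class CoordinatedSearch {
    static final Random rng = new Random(42);

    // Diversity metric: minimum number of bit-flip steps between two
    // solutions, i.e. the Hamming distance.
    static int hamming(boolean[] a, boolean[] b) {
        int d = 0;
        for (int i = 0; i < a.length; i++) if (a[i] != b[i]) d++;
        return d;
    }

    // Illustrative objective (OneMax): count the true bits.
    static int score(boolean[] s) {
        int v = 0;
        for (boolean b : s) if (b) v++;
        return v;
    }

    // One intense local-search step: flip a random bit, revert if worse.
    static void climb(boolean[] s) {
        int before = score(s);
        int i = rng.nextInt(s.length);
        s[i] = !s[i];
        if (score(s) < before) s[i] = !s[i];
    }

    // Coordination policy: when two climbers get closer than minDist,
    // restart the worse one at a random point, preserving diversity.
    static boolean[] run(int n, int climbers, int rounds, int minDist) {
        boolean[][] pop = new boolean[climbers][n];
        for (boolean[] s : pop)
            for (int i = 0; i < n; i++) s[i] = rng.nextBoolean();
        for (int r = 0; r < rounds; r++) {
            for (boolean[] s : pop) climb(s);
            for (int a = 0; a < climbers; a++)
                for (int b = a + 1; b < climbers; b++)
                    if (hamming(pop[a], pop[b]) < minDist) {
                        int worse = score(pop[a]) <= score(pop[b]) ? a : b;
                        for (int i = 0; i < n; i++)
                            pop[worse][i] = rng.nextBoolean();
                    }
        }
        boolean[] best = pop[0];
        for (boolean[] s : pop) if (score(s) > score(best)) best = s;
        return best;
    }

    public static void main(String[] args) {
        boolean[] best = run(16, 4, 2000, 3);
        System.out.println("best score: " + score(best) + " / 16");
    }
}
```

In this toy version, the 'ergometric bike phenomenon' mentioned in the abstract would correspond to climbers spinning in the same region without covering new ground; the restart rule is one naive way to avoid it.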
Posted by Alexandre Linhares at 8:40 PM | 0 comments | Labels: computer science, Massive Parallelism, technology
There is a well-known but ultimately ungrounded myth that deeper probing and understanding of the human brain and behavior threatens our agency and freedom. Here I'll share a fascinating brief presentation arguing the opposite: that it largely increases our elbow room.
Historically, psychotherapy slowly co-evolved with the behavioral and life sciences of its age, generally with a lag of up to two decades. The Gestalt psychotherapists were influenced by trends in Gestalt psychology. The biopsychiatry revolution was only possible due to enormous research in neurochemistry. The relatively recent Schema Therapy emerged from the advances of second-generation cognitive science. So, could the technological developments of the Decade of the Brain make an analogous novel contribution to mental well-being? Neuroscientist Christopher deCharms shows that the answer is positive.
Functional magnetic resonance imaging is now sufficiently advanced to allow us to contemplate, in real time, the underlying neural correlates of our mental life. This level of organization of our behavior is no longer a black box that can only be interfered with through neurosurgery. Present knowledge of the activation patterns of your own brain states can be used to guide your next mental states.
Here's the research paper on deCharms's work with chronic pain patients using fMRI technology from Omneuron (deCharms's company).
Posted by Manuel Doria at 5:09 AM | 0 comments | Labels: Manuel Doria, neuroscience, technology
One problem down, N to go.
Posted by Alexandre Linhares at 5:06 PM | 0 comments | Labels: categories, categorisation, Champagne, cognitive mechanisms, cognitive science, computer science, fluid concepts, Hofstadter, intuition, subcognitive
[Planning to keep this page updated]
=======
Dynamic sets of potentially interchangeable connotations: A theory of mental objects
Alexandre Linhares
Abstract: Analogy-making is an ability with which we can abstract away from surface similarities and perceive deep, meaningful similarities between different mental objects and situations. I propose that mental objects are dynamically changing sets of potentially interchangeable connotations. Unfortunately, most models of analogy seem devoid of both semantics and relevance extraction, postulating analogy as a one-to-one mapping without connotation transfer.
Accepted commentary, Behavioral and Brain Sciences
=======
Search intensity versus search diversity: a false tradeoff?
Alexandre Linhares and Horacio Hideki Yanasse
Abstract - An implicit tenet of modern search heuristics is that there is a mutually exclusive balance between two desirable goals: search diversity (or distribution), i.e., searching through a maximum number of distinct areas, and search intensity, i.e., maximum search exploitation within each specific area. We claim that the hypothesis that these goals are mutually exclusive is false. We argue that it is possible to devise methods that exhibit both high search intensity and high search diversity throughout the entire algorithmic execution. We consider how distance metrics, i.e., functions for measuring diversity (given by the minimum number of local search steps between two solutions), and coordination policies, i.e., mechanisms for directing and redirecting search processes based on the information acquired through the distance metrics, can be combined into a framework for developing advanced collective search methods that achieve search intensity and search diversity simultaneously. The presented model also avoids the undesirable occurrence of a problem we refer to as the 'ergometric bike phenomenon'. Finally, this work is one of the very few analyses carried out at the level of meta-meta-heuristics, because all arguments are independent of the specific problem handled (scheduling, planning, etc.), of the specific solution methods (genetic algorithms, simulated annealing, tabu search, etc.), and of the specific neighborhood or genetic operators (2-opt, crossover, etc.).
Accepted, Applied Intelligence
=======
Decision-making and strategic thinking through analogies
Alexandre Linhares
Abstract. When faced with a complex scenario, how does understanding arise in one’s mind? How does one integrate disparate cues into a global, meaningful whole? Consider the chess game: how do humans avoid the combinatorial explosion? How are abstract ideas represented? The purpose of this paper is to propose a new computational model of human chess intuition and intelligence. We suggest that analogies and abstract roles are crucial to solving these landmark problems. We present a proof-of-concept model, in the form of a computational architecture, which may be able to account for many crucial aspects of human intuition, such as (i) concentration of attention to relevant aspects, (ii) how humans may avoid the combinatorial explosion, (iii) perception of similarity at a strategic level, and (iv) a state of meaningful anticipation over how a global scenario may evolve.
Under Review, Cognitive Systems Research
=======
Questioning Chase and Simon’s (1973) “Perception in Chess”
Alexandre Linhares & Anna Freitas
Abstract. We believe chess is a game of abstractions: pressures; force; open files and ranks; time; tightness of defense; old strategies rapidly adapted to new situations. These ideas do not arise in current computational models, which apply brute force and rote memorization. In this paper we assess the computational models CHREST and CHUMP, and argue that chess chunks must contain semantic information. This argument leads to a rather bold claim, as we propose that key conclusions of Chase and Simon's (1973) influential study stemmed from a non sequitur.
Under Review
=======
A note on the problem of inappropriate contextual ads
Alexandre Linhares, Paula Mussi França, & Christian Nunes Aranha
Abstract. Web advertising is a contemporary industry of growing significance. Ad inserts are placed automatically in these systems: engines access the content of a search or of a webpage and attempt to find, using advanced economic and statistical models, a "contextual" insert of maximum expected utility. In this work we present the problem of inappropriate contextual ads. We distinguish between three types of undesirable contextual ads: (i) non-contextual ads; (ii) token-substitution ads; and (iii) inappropriate contextual ads. Inserts can be extremely inappropriate: in fact, shocking, outrageous, and disrespectful. We call such cases catastrophic contextual ads. Despite being relatively rare, these catastrophic inserts may occur in large absolute numbers. Following recent studies in cognitive science, we identify a series of reasons for such phenomena. Finally, we propose some tentative solutions to the problem.
Under Review
===========
Theory of constraints and the combinatorial complexity of the product mix decision
Alexandre Linhares
Abstract – The theory of constraints proposes that, when production is bounded by a single bottleneck, the best product mix heuristic is to select products based on their ratio of throughput per constraint use. This is not true for cases in which production is limited to integer quantities of final products. We demonstrate four facts that go directly against current thought in the TOC literature. For example, there are cases in which the optimal product mix includes products with the lowest product margin and the lowest ratio of throughput per constraint time, simultaneously violating the margin heuristic and the TOC-derived heuristic. Such failures are due to the NP-hardness of the product mix decision problem, which is also demonstrated here.
Under Review
Greetings! This is my introductory entry, and it's a great honor to be able to contribute to this weblog. Many thanks to Alexandre and the other colleagues from our local FARG group.
I'm an undergraduate student in Philosophy, with a bit of (rather unsuccessful, unfortunately) baggage in Computer Engineering, deeply interested in cognitive science and in how its empirical research intersects with traditional problems in the philosophy of mind, my main obsession being human consciousness. I intend to provide interesting commentary, drawing on fields such as neuroscience, AI, and evolutionary psychology.
There has been some media coverage this year of an ongoing and novel AI research program: the RASCALS cognitive architecture (an acronym for Rensselaer Advanced Synthetic Character Architecture for "Living" Systems). Based at the Department of Cognitive Science at Rensselaer Polytechnic Institute, RASCALS was remarkable for deploying two functional avatars into the virtual environment of the famous massively multiplayer online game/social networking community Second Life: Eddie, a 4-year-old boy, and Edd Hifend, a robot. Here's Eddie during a demo, facing a well-known experiment in developmental psychology:
RASCALS is logic-based AI with some unconventional twists. According to the researchers' Game On conference paper, the main ambition behind RASCALS is to design, at a relatively quick pace, autonomous agents that satisfy contemporary theories of personal identity, which is quite a hard task. How does one design a synthetic person that doesn't merely perform evil acts but is genuinely evil? What does it take for an autonomous virtual agent to truly have a moral character, or at least a toy model of one? Merely exhibiting convincing, complex evil behavior, something several videogame characters can already accomplish, is insufficient. Moral character demands advanced conceptualization skills, rich knowledge representation, and a belief system, besides behavioral dispositions. The main theoretical virtual persona mentioned in the article, referred to as E, is modeled after a fictional cinematic post-apocalyptic warlord drawn from prominent examples of antagonists in the entertainment industry (I suppose General Bethlehem from the motion picture The Postman is a good candidate). So, how do you make E embody evilness? The design team's strategy involves an adequate formal definition of evil; a way to deal with propositions such as the agent's goals, beliefs, and desires in an extremely expressive fashion; a contextually appropriate knowledge base; sufficient fluency in a natural language; and a believable presentation (for another demo, the RPI team designed a sophisticated facial expression system for head avatars).
The RASCALS logical inference system is pluralistic, encompassing probabilistic inference, for a better grasp of human-like reasoning, alongside standard forms of logical inference. Following a well-known (and virulently disputed) tradition in cognitive science and artificial intelligence, the architecture employs a language of thought: all cognitive phenomena are unified in a formal language, in this case a bundle of symbolic logics, with first-order logic sufficing for some processes while higher-level cognitive procedures use complementary logics such as epistemic and deontic logic. Communication is possible because the formal mentalese is converted into plain English via a Natural Language Module, in a highly sophisticated process.
Here comes another distinctive feature of RASCALS: the epistemic robustness of its agents. Merely arriving, via logical analysis, at the correct answers to a query posed in natural language is insufficient. For actual understanding (or quasi-understanding, to be charitable about the difficulties associated with intentional states), the agent must be able to justify those answers. The implication is that for every answer in natural language there is a corresponding justification in formal logic, based on the agent's knowledge base and its reasoning capabilities, which can also be translated into natural language.
Next October, a RASCALS agent with a very large (and still in development) knowledge base will run on Blue Gene and interact with humans. However unimpressive those results may turn out to be (although optimists abound), this cognitive architecture, alongside the new wave of the digital entertainment industry, is rekindling the interest and enthusiasm that once permeated AI research in its ambition to face and realistically model human-like behavior and cognition, in this case by functionalizing propositional attitudes.
Posted by Manuel Doria at 3:29 PM | 1 comment | Labels: cognitive mechanisms, computer science, Manuel Doria, technology
Follow Zimbabwe, where 100 billion dollars can get you three full eggs.
May God bless America.
Hat tip.
In 1995, Douglas Hofstadter wrote: "A visit to our Computer Science Department by Dave Slate, one of the programmers of Chess 4.6, at that time one of the world's top chess programs, helped to confirm some of these fledgling intuitions of mine. In his colloquium, Slate described the strict full width, depth-first search strategy employed by his enormously successful program, but after doing so, confided that his true feelings were totally against this type of brute-force approach. His description of how his ideal chess program would work resonated with my feelings about how an ideal sequence-perception program should work. It involved lots of small but intense depth-first forays, but with a far greater flexibility than he knew how to implement. Each little episode would tend to be focused on some specific region of the board (although of course implications would flow all over the board), and lots of knowledge of specific local configurations would be brought to bear for those brief periods."
That was, of course, many years before I would meet Doug.
How do chess players make decisions? How do they avoid the combinatorial explosion? How do we go from rooks and knights to abstract thought? What is abstract thought like? These are some of the questions driving the Capyblanca project. The name, of course, is a blend of José Raoul Capablanca and Hofstadter's original Copycat project, implemented by Melanie Mitchell, which brought us so many ideas. Well, after almost five years, we have a proof of concept in the form of a running program, and we are GPL'ing the code so that interested readers might take it in new directions we cannot foresee. Some instructions are in the paper, and feel free to contact me as you wish.
The manuscript is under review in a journal, and a copy of the working paper follows below. Interested readers might also want to take a look at some of our previous publications in AI and Cognitive Science:
(i) Linhares, A., & P. Brum (2007), "Understanding our understanding of strategic scenarios: what role do chunks play?", Cognitive Science, 31, pp. 989-1007.
(ii) Linhares, A. (2005), "An active symbols theory of chess intuition", Minds and Machines, 15, pp. 131-181.
(iii) Linhares, A. (2000), "A glimpse at the metaphysics of Bongard Problems", Artificial Intelligence, 121 (1-2), pp. 251-270.
Any feedback will be highly appreciated!
--Alex
Posted by Alexandre Linhares at 4:45 AM | 0 comments | Labels: chess, cognitive mechanisms, cognitive science, computer science, psychology, technology
Slashdot, my favorite L337 geek hangout, is discussing an interview with DugHof. The discussion is actually pretty cool, the long mentions of "the singularity that is Kurzweil" notwithstanding.
Though Doug usually dismisses hacker culture, I don't, and I think we should really welcome our new Slashdot overlords. Two basic reasons here, beyond the whole power-to-the-people cliché. First, some /. discussions are really worthwhile, and some participants bring very insightful analysis in their comments. In fact, a great way to learn about all things technical, right after the obvious Wikipedia lookup, is googling "site:slashdot.org whatever you're after, dude" and catching up with the discussions. And who knows? Maybe one day this blog will even be slashdotted. That would be nice for our PageRank and world-domination plans, which brings me to the second reason.
Now, the second reason is a serious one. As progress in FARG architectures evolves, we will need more and more lookups into the most cutting-edge stuff, such as GPGPU or reflection. A general FARG framework is essentially an operating system, from the inside and from the outside. From the inside it packs application and problem loaders, various types of memory management (external, working memory, semantic memory, episodic memory, etc.), task allocation and scheduling, and parallel multiprocessing. From the outside it is also like an operating system, enabling new kinds and types of "FARG apps". This is, in fact, the coolest operating system to be working on, and I am astonished that companies like Microsoft or Sun or IBM plainly do not know what this is all about. We could make some serious long-term contributions to computer science; yet sometimes it feels that, even with all the geekdom love Doug eventually gets, the word on FCCA and later works is yet to be spread.
Or, to put it in /. terms, I feel that FARG == new PARC(). If you don't agree, then, seriously: you must be new here.
Posted by Alexandre Linhares at 10:43 AM | 1 comment | Labels: computer science, Hofstadter
[Figure: two image patches ("blobs") shown side by side]
Surprisingly, the blob on the right is identical to the one on the left after a 90-degree rotation.
In the absence of sufficient information about an object's identity, one searches for contextual evidence, force-fitting the categorization to the regularities of the world.
As we have seen before on this blog, the contextual cognitive module might be unique, acting in the same way across every human task. This is an image-processing example, but it could just as well be a natural-language-processing example.
Even when objects can be identified via intrinsic information, context can simplify object discrimination by cutting down the number of object categories, scales, and positions that need to be considered.
http://people.csail.mit.edu/torralba/IJCVobj.pdf
Posted by Christian Aranha at 2:41 PM | 1 comment | Labels: Christian Aranha, cognitive mechanisms, fluid concepts
This is tomorrow's presentation at FGV. We're looking for ambitious undergrads who want to take a shot at making something meaningful. Hopefully, someone will be interested.
Posted by Alexandre Linhares at 7:06 PM | 0 comments | Labels: class 1, computer science, semantic web, software design, technology
In the study, subjects were asked whether they would accept or decline another person's offer to divide money in a particular way. If they declined, neither they nor the person making the offer would receive anything. Some of the offers were fair, such as receiving $5 out of $10 or $12, while others were unfair, such as receiving $5 out of $23.
http://www.newsroom.ucla.edu/portal/ucla/brain-reacts-to-fairness-as-it-49042.aspx
(This page will be updated with further details as soon as possible)
Hello world!
After reading this fantastic book and playing with this, I think one good way to proceed is to open-source some parts of a FARG framework that are not its core but are extremely useful, so that everyone can benefit from them.
I'm thinking first about a slipnet viewer: a Java class that receives a list of nodes and links and creates a nice view of the ongoing slipnet at any point in time. A node might consist of its activation level and a bitmap to display inside the node (sometimes we may want to display something other than a string), while a link might include just the nodes it connects, (perhaps) a direction, and a string (to show distances, and for those with IS-A beliefs).
The class would take this information and create another bitmap, now with a beautiful view of the current slipnet: close nodes appear close to each other, distant nodes appear distant, and their activation levels are displayed. From my past life in combinatorial optimization, I have a hunch that this layout problem is NP-hard, so we may have to resort to some heuristic that works.
It should be in Java, to run on everybody's machine, and also because everyone knows Java and could either call it from their own weirdo language or rewrite the code for their project.
In this initial stage, no windows or anything fancy should be done. Just get the data in and output a bitmap with the slipnet. But if our collaboration works, we could go bigger, triggering a window in a new thread and having a great display running in true parallel style. That would, I think, be a first step that everyone would benefit from.
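As a starting point, here is a minimal, hypothetical sketch of the data model plus one spring-embedder relaxation pass (all class and method names are mine, not from any existing FARG codebase). Bitmap rendering is left out; since a provably nice layout is likely out of reach, a heuristic pass like this just nudges linked nodes toward their desired distances.

```java
import java.util.*;

// Hypothetical sketch of the slipnet-viewer data model plus one
// spring-embedder relaxation step. Names are illustrative only.
class SlipnetSketch {
    static class Node {
        final String label;     // could be swapped for a bitmap
        double activation;      // in [0, 1], shown as node size/shade
        double x, y;            // current layout position
        Node(String label, double activation, double x, double y) {
            this.label = label; this.activation = activation;
            this.x = x; this.y = y;
        }
    }

    static class Link {
        final Node from, to;
        final double distance;  // desired on-screen distance
        Link(Node from, Node to, double distance) {
            this.from = from; this.to = to; this.distance = distance;
        }
    }

    // One relaxation pass: each link pulls (or pushes) its endpoints
    // toward the desired distance. Repeat until the layout settles.
    static void relax(List<Link> links, double step) {
        for (Link l : links) {
            double dx = l.to.x - l.from.x, dy = l.to.y - l.from.y;
            double d = Math.hypot(dx, dy);
            if (d == 0) continue;
            double f = step * (d - l.distance) / d;
            l.from.x += f * dx; l.from.y += f * dy;
            l.to.x -= f * dx; l.to.y -= f * dy;
        }
    }

    public static void main(String[] args) {
        Node a = new Node("zipper", 0.9, 0, 0);
        Node b = new Node("DNA", 0.6, 10, 0);
        List<Link> links = List.of(new Link(a, b, 5.0));
        for (int i = 0; i < 200; i++) relax(links, 0.1);
        System.out.printf("distance after relaxing: %.2f%n",
                Math.hypot(b.x - a.x, b.y - a.y));
    }
}
```

Repeating relax until positions settle gives the "close nodes appear close, distant nodes appear distant" property; rendering the result to a bitmap would then be a separate, straightforward step.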
This is small stuff, of course, but it's annoying to redo it every day in every single project. It takes time, and it distracts from the core issues. Our productivity will rise. So, as Michael Roberts once said, instead of having "obsessive geniuses" working in the basement, we should finally stop doing the same things over and over again. We should finally start collaborating like a small research group.
Or like a start-up company.
Posted by Alexandre Linhares at 2:24 AM | 0 comments | Labels: fluid concepts, technology
Here's the email I've received from HP's Upline program.
On Sat, Apr 19, 2008 at 12:46 AM, HP Upline Paypal Notifications wrote:
Dear HP Upline Service subscriber,
On Thursday, April 17th, HP suspended operation of the HP Upline Service. We fully anticipate that suspension of the Upline Service will be temporary and short in duration, and will notify you when the Upline Service is operational again.
Please accept our sincere apology for this unanticipated interruption of your access to the Upline Service. We appreciate your patience as we launch this new service, and are working hard to minimize inconvenience caused by this service interruption.
If you are a resident of the United States, your subscription will remain in effect and you will be able to continue using the Upline Service for the duration of your subscription period once the Upline Service is operational again. Thank you for your patience, and we look forward to providing you with the HP Upline Service.
If you are not a resident of the United States, we regretfully must inform you that the initial launch of the HP Upline Service was intended for United States residents only. Unfortunately, our filtering tools did not adequately screen for subscribers residing outside of the United States. We thank you for your early adoption of the Upline Service, and look forward to being able to provide the HP Upline Service to you when we launch it in your country of residence. Since the HP Upline Service is presently offered for use within the United States only, we will be discontinuing your current subscription. After we notify you that the Upline Service is operational again, you will have a limited period of time to access and download files that you have uploaded onto the HP Upline Service servers. After that time period, you will no longer have access to your present HP Upline Service account. If you would like to be contacted by us when the HP Upline Service is made available in your country of residence, please send us an email at help@upline.com. We apologize for any inconvenience.
Sincerely,
The HP Upline Team
Posted by Alexandre Linhares at 1:48 AM | 1 comment | Labels: shortsightedness, technology
...and the speed of change is accelerating... and I would like to invite readers in Rio de Janeiro to our Pangea Day broadcast.
Wahhabism is slowly going down and out...
Technology which costs thousands and takes years to develop goes for 50 bucks and is developed in 5 months...
The gigantic exodus toward cities and mega-cities might actually be a good thing...
The Pentagon might learn something from failure...
And this might be just a temporary fad, or a huge turning point...
Mike Arrington is proposing a new, mostly phone-based social network. It's really a great read. The basic idea is that you could browse people on the go and find out who's around you in a restaurant or other places. You would broadcast your profile and receive broadcasts of other people's profiles. With privacy settings, of course. I wrote about this on Newsvine back in 2006.
There are two types of comments on TechCrunch: “absurd!” or “awesome!”
I would bet that there is a strong correlation to the commenter’s age.
The people who say “absurd!” fall into a few camps:
(i) "girls will never use it"
(ii) "only übergeeks will use it"
(iii) "are you a lunatic? What about government oversight?"
(iv) "there's no way to monetize it. Ads on a small phone screen?!?"
(v) "Don't you think we are getting geekier and geekier all the time? Don't you think we are looking more and more at screens all the time?"
These objections are wrong, and Michael is absolutely right. This is a game changer, and a huge billion-dollar thing. I wrote about this in 2006 (back when I thought Newsvine was going somewhere).
Here are my views on the objections people have placed there:
To those concerned with government oversight: That’s a serious issue, but the way to handle it is to guarantee users that no government entity of any kind can be allowed as a “business”, and no info will be sent to them, unless users explicitly agree. They might make fake profiles of users, but they won't be able to communicate with you. No email, phone, or other info should be broadcast (unless someone is really adventurous).
Why girls will use it: girls will broadcast only minimal info, a photo and a name, or even just a nickname. But they will stalk the guys, looking at the photos, videos, and resumes to find out who each one is. For the ones they like, they may enable their full profiles. If I've learned anything at all about women, it is that knowing who a guy is matters more than his looks alone. Today they look around and see only “random” guys, so appearance is the only factor once you're in a restaurant. Now, if you're broadcasting info that shows you have a future, girls take notice.
Why everyone will use it, not just übergeeks.
Ever heard that beautiful quote, “the future is unevenly distributed”?
These people are reasoning: this has no value for me, so I don’t want it.
But this is like email was in 1992. I had an account, but nobody else had one, so its value was zero. Take away my email now and I can't communicate. Early adopters will be geeks, as always, but soon the network effects will kick in and the value will increase rapidly (for everyone, including grandma). So these guys who say they'll never use it are in for a shock when they've been standing in a line for 20 minutes and some agent tells them: “Your luggage has been found, Mr. Arrington.” “How, if he wasn't even in this bloody lost-luggage line we're standing in?” “Well, he was broadcasting his info to our system (which is on the network, and which is the way to finance it).” So your phone picks up that you're in a luggage-complaint system; you click on it and fill out a form. The old fellows look like penguins in a line. And you get served first. Then they finally ‘get it’. The thing has value. Of course, we are getting geekier and geekier all the time. We will be looking at screens more and more. Yet life will be smoother.
The iPhone is currently the only phone on which the user interface could be this smooth, without pressing a precise sequence of 50 buttons to browse the people around you. But perhaps Android will be another, and perhaps the other phone companies can catch up in a few years.
About Facebook: it should obviously get in on this right now, or become MySpace and face the consequences. But Facebook should first change to include the many dimensions a person has.
* I have a PhD in computer science, and I’m working on computational cognitive models. I might want to meet people with similar pursuits; but that’s only part of someone’s dimensions;
* I am crazy about Hôtel Costes, or about In Search of Sunrise;
* I am an associate member of the club of rome, and I would certainly like to meet fellow members if/when we are close by.
* I am a professor of management science, I might want to broadcast that info in some places, and not others
* I am an entrepreneur, and I’d like to meet other similar creatures (or broadcast that info to any Venture Capitalists stalking around).
Facebook offers only one dimension, and that is a serious shortcoming, because you want to select the specific info that will be broadcast. You don’t want to broadcast your funny drunk-party pics or serious business info everywhere (in a random bar here in Rio you could actually be kidnapped). But you might do it in a high-profile scenario.
Now, here’s how to monetize it: free for users, and businesses pay a relatively small fee. This could be attractive to businesses because they could attract people by broadcasting their existence, bookmarking people, and offering discounts and automatic self-service.
Jeffreys has commented on how to monetize: “How do you monetize it? When you walk past a store with a sale you might be interested in, it tells you. Like Amazon’s recommendation engine. Most of the time you’ll ignore it, but it will alert you to something you want to buy often enough to pay for itself.”
Here’s some building on top of that: the device NEVER, NEVER distracts you. It receives your info, and bookmarks you, but never sends you an email or a call or anything spam-like. If you want to know why, read On Intelligence.
Suppose you’re the owner of a restaurant. You pay to access the service as a business. Then some smoking-hot girls come in, and you bookmark them as interesting for your place. Later, on slow, empty days, you have all these cooks and waiters on the payroll but no customers: you send out, to 40 chicks, a $50 offer, valid for one hour. If you have bookmarked hundreds of prospects, people will start appearing. And people in a restaurant attract more people.
But nobody should ever be bothered personally. Instead, each user should have an offers page: they go to that page and see offers for free drinks at place X, an 80% discount at an empty restaurant, 50% off at an empty hotel, or 70% off an empty seat on a flight to New York. All of these offers have an expiration time, so you can take one, think about it for a while, or leave it.
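The offers page described above can be sketched in a few lines; all names and numbers here are hypothetical, just to make the mechanism concrete:

```python
# A minimal sketch of an offers page: each offer carries an expiration
# time, so a user can take it, think it over, or let it lapse. Nobody is
# ever pinged; the page is simply consulted.
import time

class Offer:
    def __init__(self, business, discount, ttl_seconds):
        self.business = business
        self.discount = discount                     # e.g. 0.80 = 80% off
        self.expires_at = time.time() + ttl_seconds  # expiration time

    def valid(self):
        return time.time() < self.expires_at

offers = [Offer("restaurant X", 0.80, ttl_seconds=3600),
          Offer("hotel Y", 0.50, ttl_seconds=7200)]

# The user's offers page shows only unexpired offers.
page = [o for o in offers if o.valid()]
print(len(page))  # → 2
```

Expired offers simply drop off the page, which is what lets businesses make aggressive, short-lived discounts without spamming anyone.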
WHY THIS IS ENORMOUSLY VALUABLE: once a plane has taken off, each empty seat has cost a load of money but brought in no revenue; that's money down the drain. Businesses have excess capacity, and price flexibility helps balance that capacity against actual demand, so there is an incredible economic incentive for businesses to pay something like $1000/year, or $1/customer bookmark/year, or maybe $1/100 offers, or more. In the long run, the winning network will be collecting bucketloads of money from thousands of businesses, small and large, without annoying anyone.
And businesses will be happy: The problem of managing capacity, utilization or operation levels, and demand, will be minimized for businesses that jump in. Life will be smoother for businesses (better yield management), and for people (great personalized offers, no intruding ads).
This is one of the most promising ideas right now. In the long run, there will be only one social network people actually log in to, and it will be the one in which they can browse people. On the go.
By the way, there are many more ideas, and I would be considering your job offers now. See you soon, Zillionaires!
Posted by
Alexandre Linhares
at
10:07 PM
1 comments
Links to this post
Labels: computer science, social networks, technology
Each relation has some elements, and each element usually has a role within that relation. Moreover, there is a function which takes these elements and creates new elements. A relation finds items of certain kinds (their roles) and creates other items of certain kinds, possibly with a particular value.
For example, in NUMBO:
Multiplication: item1 (operand), item2 (operand), item3 (result)
In COPYCAT:
Successor: item1 (letter_value), item2 (letter_value), alphabetic_distance(item1, item2) = 1 (number)
In CHESS:
Attack: item1 (piece), item2 (piece), move_distance(item1, item2) = 1 (number); the attacker is item1, the attacked is item2
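These examples can be captured in a short sketch; the names below (Item, Relation, the role strings) are hypothetical, not taken from any FARG source code:

```python
# A relation declares the roles (kinds) of items it looks for, plus a
# function that builds the new item from them.
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Item:
    kind: str       # the role this item can fill, e.g. "operand"
    value: object   # its content, e.g. 7 or "b"

@dataclass
class Relation:
    name: str
    roles: Sequence[str]                      # kinds of items it finds
    build: Callable[[Sequence[Item]], Item]   # creates the new item

# NUMBO-style multiplication: two operands yield a result item.
multiplication = Relation(
    name="Multiplication",
    roles=["operand", "operand"],
    build=lambda items: Item("result", items[0].value * items[1].value),
)

# Copycat-style successor: holds when alphabetic distance is exactly 1.
def successor_build(items):
    d = ord(items[1].value) - ord(items[0].value)
    assert d == 1, "not successors"
    return Item("number", d)

successor = Relation("Successor", ["letter_value", "letter_value"],
                     successor_build)

r = multiplication.build([Item("operand", 6), Item("operand", 7)])
print(r.value)  # → 42
```

The chess Attack relation would follow the same shape, with "piece" roles and a move_distance check in its build function.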
This is why I've found the quote from De Morgan so sinister.
Posted by
Alexandre Linhares
at
2:13 AM
0 comments
Links to this post
Labels: cognitive mechanisms, fluid concepts, psychology, technology
So here's the presentation I gave at FGV today. Unfortunately, we didn't record it, so there's no sound. I hope it may still be of some value to those interested.
Posted by
Alexandre Linhares
at
10:53 PM
0 comments
Links to this post
Labels: analogy-making, bounded rationality, categorisation, chess, cognitive mechanisms, cognitive science, decision-making, game-theory, intuition, perception
I've been thinking about massively parallel FARG, distributed temperature, and distributed coderacks:
Now, whenever a codelet is about to change something, why add it to the global, central, unique coderack? I don’t see a good reason here, besides “that’s what we’ve always done”. If a codelet is about to change some structures in STM, why not (i) keep a list (or a set, or a collection, etc.) of the structures in question and (ii) create a list-subordinated coderack on the fly? Instead of being thrown into a central repository, codelets go directly to the places where they were deemed necessary. There are multiple repositories for codelets: multiple coderacks.
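A sketch of the idea, with hypothetical names (this is not code from any FARG project): each workspace structure owns its own small coderack, so an independent worker could run each rack in parallel.

```python
# List-subordinated coderacks: codelets are posted to the structure they
# concern, not to one central urn.
import random

class Coderack:
    def __init__(self):
        self.codelets = []   # (urgency, callable) pairs

    def post(self, urgency, codelet):
        self.codelets.append((urgency, codelet))

    def run_one(self):
        if not self.codelets:
            return
        # choose stochastically, biased by urgency, as in classic FARG
        weights = [u for u, _ in self.codelets]
        i = random.choices(range(len(self.codelets)), weights=weights)[0]
        _, codelet = self.codelets.pop(i)
        codelet()

class Structure:
    """A workspace structure with its own subordinate coderack."""
    def __init__(self, name):
        self.name = name
        self.coderack = Coderack()   # local, not global

s = Structure("group:abc")
s.coderack.post(10, lambda: print(f"strengthening {s.name}"))
s.coderack.run_one()
```

Since each rack is local, a separate thread or machine could service each structure, which is the "really parallel" model argued for above.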
Posted by
Alexandre Linhares
at
1:35 AM
0
comments
Links to this post
Labels: Coderack, fluid concepts, Massive Parallelism, technology, Temperature
Mike Arrington at TechCrunch is crying like a baby Scoble, as he faces upwards of 2400 unread emails.
Posted by
Alexandre Linhares
at
8:33 PM
1 comments
Links to this post
Labels: computer science, editorial, technology
Recently, on FARG's internal mailing lists, we have discussed hyperbole in cognitive science and all the fantastic claims that numerous cognitive scientists make. Every would-be Dr. Frankenstein out there seems to claim to have grasped the fundamental theory of the mind, and next year we will finally have the glorious semantic web, we will be translating War and Peace into Hindi in 34 milliseconds, we will be having love and sex with robots, and, of course, we will be able to download our minds into a 16GB iPhone and finally achieve humanity’s long-sought ideal of immortality.
Doug Hofstadter, of course, has long been dismissing these scenarios as nothing short of fantastic.
I think it’s safe to say that, in these sacred halls of CRCC, we are Monkeyboy-Darwinist-Gradualists who are really disgusted by “excluded-middle theories”: either something understands language or it doesn’t. Either something has consciousness or it doesn’t. Either something is alive or it isn’t. Either something thinks or it doesn’t. Either something feels pain or it doesn’t.
I guess it’s safe to say that we believe in gradualism. The lack of gradualism, the jump from interesting ideas to “next year this will become a human being”, goes deeply against my views. So my take on the whole issue of grand statements in cognitive science is that much more gradualism is needed. People seem to hold enormously simplistic views of the human mind.
As gradualists, we do, however, believe in the longer-term possibility of the theories being developed and cognitive mechanisms being advanced and machines becoming more and more human-like.
In fact, Harry has even stopped (note that “stopping” is temporary, and different from “quitting” or “leaving”) his work on Bongard problems. Harry feels that our work will lead to dreadful military stuff. In fact, it is already happening, as he points out, and here is an eerie example. (Look at how this thing escapes a near-certain fall on the ice.)
This “baby” is called the BigDog, and, yes, it is funded by DARPA. So there we have it, Harry: already happening. The military will get their toys, with or without us.
And this is gradualism at its best. Remember: this thing is not an animal. It is not alive.
But is it just as mechanical as a toaster?
Posted by
Alexandre Linhares
at
3:14 PM
0 comments
Links to this post
Labels: editorial, science and ignorance, technology
I believe, and this is a central aspect of development in the Human Intuition Project Framework, that there are three types of connotations: properties, relations, and chunks.
A property is anything that has a value. It could be a numerical value, a name, or anything else.
A chunk is a mental object, holding stuff together. Any mental object is a chunk.
Finally, a relation maps from a set of properties, chunks, and relations to new properties, chunks, or relations. It is very much like a relation in the mathematical sense. And this quote from Augustus De Morgan, mixing psychology and mathematics, is just eerie to my ears:
"When two objects, qualities, classes, or attributes, viewed together by the mind, are seen under some connexion, that connexion is called a relation."
Posted by
Alexandre Linhares
at
3:37 PM
0 comments
Links to this post
Labels: fluid concepts, technology
The Exorcist Economist is running a story (now on the cover) about the financial meltdown and the Fed's rate cut. Dramatic times. I've placed the following comment, and, if anything, I will be really popular as the apocalypse unfolds and we start to eat rats. Here's my top-rated comment, followed by some favorite ones:
linhares wrote (March 18, 2008 22:38):
Now take a look at this:
Ok. I am a little on the slow side. So let me get this straight.
The US is a country that lives on borrowing.
The dollar is falling like a skydiver.
Commodity prices are soaring, and lower US demand won't change much of that.
By cutting the rates, correct me if I'm wrong, those trillions of dollars held by the Chinese, Indians, Arabs, Brazilians, and so on, will lose value even faster.
So, if these countries ever decide to protect their (hard-earned) cash, they should switch. Perhaps to the new alternative in town, the Euro.
And if they switch, which they should rationally do, the dollar ceases to be the world standard, inflation in america skyrockets overnight, and the value of goods inside the usa becomes a huge unknown.
But of course I'm wrong. The best way to treat a (debt) alcoholic is to give it an ample supply of liquor, for sure.
Recommended (57)
Great Cthulhu wrote (March 19, 2008 17:14):
Personally, I am doing everything I can to rack up over $1 billion in personal debt, knowing full well that the US government will bail me out, as I'll be someone "too big to let fail" at that point. The problem is in getting enough credit cards to max out. You'd think with all the junk mail those credit card companies send out, I'd have over $1 billion in my back pocket by now, but I don't. With a credit limit of even $1 million per card, I'd need a thousand of the things to hit my target debt. Most only start with $25,000-$100,000, depending on what fake information I've used to get free subscriptions to magazines that target corporate executives, and that means I'll need about 10,000-40,000 credit cards for my project.
I guess I should just face it. I'm too poor to matter to the Fed. Oh well... a dollar collapse will at least make illegal immigration a moot issue, leave the US unable to pay for its wars overseas, and will give me the opportunity to discover a new career catering to the wants and needs of foreign tourists here in the states... perhaps I could supplement my income as a taxi driver at nights and earn some precious Euros, Pounds, Canadian Dollars, and Pesos in my tips... that would be something!
Recommended (23)
cognate wrote (March 18, 2008 22:00):
Ahhhh, the wonders of the welfare-warfare state.
Better brush up on your potato planting, chicken feeding, and goat milking skills - just like in Doctor Zhivago.
Recommended (11)
Posted by
Alexandre Linhares
at
4:31 PM
1 comments
Links to this post
Labels: editorial, news analysis
The Economist is finally mentioning Jeff Hawkins's work, in its current Technology Quarterly.
Mr Hawkins's fascination with the brain began right after he graduated from Cornell University in 1979. While working at various technology firms, including Grid Computing, the maker of the first real laptop computer, he became interested in the use of pattern recognition to enable computers to recognise speech and text. In 1986 he enrolled at the University of California, Berkeley, in order to pursue his interest in machine intelligence. But when he submitted his thesis proposal, he was told that there were no labs on the campus doing the kind of work he wanted to do. Mr Hawkins ended up going back to Grid, where he developed the software for the GridPad, the first computer with a pen-based interface, which was launched in 1989.
Unfortunately, the piece focuses much more on the man than on Numenta's work.
Hawkins is certainly right in his "grand vision", but he is also certain to stumble into three serious problems that will take decades to solve.
First, he believes "pattern-recognition is a many-to-one mapping problem". That is simply wrong, as I pointed out in the journal Artificial Intelligence ages ago. If he is a rapid learner, he will backtrack from that mistake soon. Otherwise he may spend ages on this classic error.
Secondly, his HTM model currently uses a statistical model with numerous design decisions. That by itself would not be problematic were it not for the fact that ALL nodes (and here we are talking about a gigantic number of them) follow precisely the same statistical rule. The problem with that approach is that the slightest, imperceptible error in a parameter setting or a design decision will propagate rapidly, and amplify into utter gibberish.
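A toy illustration of that amplification (this is not Numenta code, just a caricature of the argument): when every node applies the same rule, a tiny shared parameter error compounds layer by layer instead of averaging out.

```python
# Pass a signal through N identical nodes. A 1% error in one shared
# parameter (gain) compounds multiplicatively across 100 layers.
def propagate(signal, gain, layers):
    for _ in range(layers):
        signal *= gain     # the identical rule applied at every node
    return signal

exact = propagate(1.0, 1.00, 100)
off   = propagate(1.0, 1.01, 100)   # a 1% mis-set parameter, everywhere
print(round(exact, 2), round(off, 2))  # → 1.0 2.7
```

With the error shared by all nodes there is no cancellation; the output drifts exponentially away from the intended one.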
Finally, it is virtually impossible with current technology to "debug" Numenta's approach. We are talking about gigantic matrices filled with all kinds of numbers in each spot; how does one understand what the system is doing by looking at, at most, a few thousand cells at a time?
I have taught PhD courses on "cognitive technology", and I do believe that a new information-processing revolution may hatch in a decade or so. However, we are dealing with much harder territory here than creating successful Silicon Valley startups. The tiniest error propagates throughout the network and is rapidly amplified. It is impossible to debug with current technology. And some of his philosophical perspectives are simply plain wrong.
While I do think Hawkins will push many advances, not least by firing up youngsters and hackers leaving web 2.0, there are others who are building on a much more promising base (Google, for instance, Harry Foundalis).
Posted by
Alexandre Linhares
at
8:09 PM
2 comments
Links to this post
Labels: cognitive mechanisms, cognitive science, technology
Directly from the pages of the "prophet".
Posted by
Alexandre Linhares
at
8:01 PM
12 comments
Links to this post
Labels: science and ignorance
Some of the things I've been thinking about concern this question: how to make FARG massively parallel? I've written about parallel temperature, and here I'd like to ask readers to consider parallel coderacks.
Like temperature, the coderack is another global, central structure. While it merely models what would happen in a massively parallel mind, it does keep us from a more natural, truly parallel model. Though I'm not coding this right now, I think my sketched solution might even help with the stale-codelet problem Abhijit mentioned:
We need the ability to remove stale codelets. When a codelet is added to the coderack, it may refer to some structure in the workspace. While the codelet is awaiting its turn to run, this workspace structure may be destroyed. At the very least, we need code to recognize stale codelets to prevent them from running.
Consider that most codelets fit into one of three kinds: (i) they can propose something to be created or destroyed, (ii) they can evaluate the quality of such a change, and (iii) they can actually carry it out.
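The three kinds and the staleness check can be sketched together; names here are hypothetical, not taken from any existing FARG codebase:

```python
# A codelet records its kind and the structure it refers to; the rack can
# skip it if that structure has since been destroyed.
class Workspace:
    def __init__(self):
        self.structures = set()

class Codelet:
    KINDS = ("propose", "evaluate", "build")

    def __init__(self, kind, target, action):
        assert kind in self.KINDS
        self.kind, self.target, self.action = kind, target, action

    def stale(self, ws):
        # stale if the structure it refers to no longer exists
        return self.target is not None and self.target not in ws.structures

ws = Workspace()
ws.structures.add("bond:ab")
c = Codelet("evaluate", "bond:ab", lambda: "score it")
print(c.stale(ws))                  # → False
ws.structures.discard("bond:ab")    # structure destroyed while c waited
print(c.stale(ws))                  # → True
```

With list-subordinated coderacks the check becomes almost free: when a structure dies, its whole subordinate rack (and every codelet in it) dies with it.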
Posted by
Alexandre Linhares
at
3:33 AM
1 comments
Links to this post
Labels: fluid concepts, technology
We here at Capyblanca are cheering for our own Harvard girl; who would have imagined?
More seriously, we are celebrating the thesis defense of Mrs Anne Jardim, on the ultimatum bargaining game. Anne is an economist, and she spent the last few months completing her research at Harvard Law School. We would never miss the chance to poke some fun at her, er, celebrate her achievements. Here's a peek at the thesis's conclusion.
==
Most of economic theory and the literature on decision-making rests upon the assumptions of rationality and maximization of utility. In this thesis, we have provided a review of the modern research literature concerning the ultimatum bargaining problem.
The ultimatum bargaining problem arises in asymmetric situations in which a known amount will be split between two actors: one, the proposer, proposes the split, while the other, the responder, accepts or rejects the offer. While the proposer is in the stronger strategic position, the responder has the power to block the deal, to the detriment of both. This is not only a recurring problem in applied game theory and economics, but also a theoretically interesting one.
It is recurring because it models a large class of ultimatum situations, arising in domains as diverse as biology, human relationships, economic behavior between firms, and international relations. When a male marks its territory, that is a kind of ultimatum; it is up to other males to accept it or reject it by fighting. When companies fight publicly, they usually send ultimatum offers through the press: "Unless Apple is willing to alter pricing behavior, NBC will stay out of iTunes". In fact, in many kinds of conflicting-interest scenarios, ultimatums are an important part of the bargaining process. The particular model studied here represents an important set of these situations, and is of great importance in the real world.
Moreover, it is also theoretically interesting, because humans do not respond as economic theory would predict. Quite the contrary: human behavior is enormously far from the expected rational behavior.
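The gap can be made concrete with a toy round of the game (the 30% rejection threshold below is illustrative only; experimental thresholds vary):

```python
# One ultimatum round over a pot of 10 units. Subgame-perfect rationality
# predicts the proposer offers the minimum and the responder accepts any
# positive amount; experimentally, low offers are often rejected.
def play(offer, threshold, pot=10):
    """Return (proposer, responder) payoffs, offers in whole units."""
    if offer < threshold:
        return 0, 0                # responder blocks: both get nothing
    return pot - offer, offer

print(play(1, 0))   # theory: tiny offer, always accepted → (9, 1)
print(play(1, 3))   # practice: tiny offer meets a fairness threshold → (0, 0)
```

The second line is the theoretically puzzling one: the responder destroys her own payoff to punish an "unfair" split, which no utility-maximizing model of this one-shot game predicts.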
This fact has triggered an enormous amount of scientific interest in this game, and many different types of studies are now being conducted. In the table below we present a taxonomy of such studies; this table characterizes our critical review of the literature.
There is not yet a consensus on why people deviate from the expected Nash equilibrium, but these deviations from rationality are informative about human cognition. Current economic theory is based on the normative model of decision-making: decision-making is treated as maximization of utility. However, if that cannot be expected to hold even in very simple scenarios, such as the one studied here, new mathematical models may eventually replace the standard "rational actor" model.
These new models should be as general and applicable as the standard rational actor. But they should also be psychologically plausible. As we have seen, progress in understanding ultimatum bargaining is steady. In the coming decade, as new data and new models are discussed, a consensus may form. As we have seen, ongoing research on ultimatum bargaining, ultimately, may turn out to bring sweeping changes into the nature of economic theory.
Posted by
Alexandre Linhares
at
5:12 PM
0 comments
Links to this post
Labels: behavioural economics, decision-making, economic theory
Posted by
Christian Aranha
at
7:02 PM
0 comments
Links to this post
Labels: challenge, Christian Aranha, math, psychology
Last week saw an immense burst of FARG activity. A new blog has been set up, as have other initiatives. It seems that Michael Roberts is now officially developing a Framework (and applying it to Copycat). More as the story unfolds.
Posted by
Alexandre Linhares
at
10:09 PM
0 comments
Links to this post
Labels: editorial, fluid concepts
He can bring people together, but can he make history?
Posted by
Alexandre Linhares
at
12:16 AM
0 comments
Links to this post
Labels: editorial, news analysis
If something, anything, is "thinkable", then it is bottom-up; it can be seen or felt and change mental context (e.g., alter contents on the slipnet).
And anything thinkable, however abstract, can also be imagined--thus it also exerts top-down pressure.
A small thought for man, a big leap for FARG computational modeling.
Posted by
Alexandre Linhares
at
3:07 PM
0 comments
Links to this post
Labels: chunking, fluid concepts, technology
Take a look at a dolphin and a shark and think about convergent evolution.
I have, of course, read Michael Roberts's ideas on a FARG core engine, separating the essential from the accidental in a domain, a bunch of times.
But after some email exchanges, I'm stunned to see that many of the ideas we're proposing on this website had also been in his vision. Which brings up the question:
Is it convergence? Are we both right to pose (i) domain-free codelets, (ii) distributed temperature, (iii) slipnet nodes with structure? Are we converging to the same ideas because these are, in a sense, the right ideas?
Or have crimes been committed? Have I simply stolen his blueprints and am now, years later, claiming that I've stumbled into them, and just feel like they're mine because time has passed and when I went back to the drawing board all I could see was what was already in my mind?
Under the advice of my prestigious law firm of Cravath, Swaine & Moore LLP, I plead not guilty.
First, I couldn't understand the details of Michael's ideas back then. I had to study a lot of design patterns along the way in order to see a new kind of flexibility in software, and the full meaning of encapsulation and polymorphism. Years later, having developed Capyblanca to a certain extent, I can appreciate the difficulties inherent in separating essence from accident in FARG. With this baggage I've stumbled upon many similar ideas, like distributed temperature. He argues for distributed temperature, but he doesn't explicitly say why. And many of his ideas and mine are still a little different. (I've yet to convince him of the connotation thing, if he doesn't grasp its reasons immediately.)
I seriously think this has been the product of convergent evolution. Which makes me optimistic.
We're on the right track.
Posted by
Alexandre Linhares
at
11:44 AM
0 comments
Links to this post
Labels: fluid concepts, technology
Here's a new commentary on a target article, to appear in Behavioral and Brain Sciences. The full piece is available through email.
==
Dynamic sets of potentially interchangeable connotations: A theory of mental objects
Alex Linhares
Abstract: Analogy-making is an ability with which we can abstract from surface similarities and perceive deep, meaningful similarities between different mental objects and situations. I propose that mental objects are dynamically changing sets of potentially interchangeable connotations. Unfortunately, most models of analogy seem devoid of both semantics and relevance-extraction, postulating analogy as a one-to-one mapping devoid of connotation transfer.
Posted by
Alexandre Linhares
at
10:27 PM
0 comments
Links to this post
Labels: analogy-making, chunking, cognitive science, computer science, psychology
What if codelets are only bossing and bitching around?
I mean, from what I get from all the code I've seen in so many projects, codelets actually do things. They work directly on representations. Sometimes they scout STM for something to be created; sometimes they build some stuff out there; sometimes they trigger other codelets, etc.
But what if they only sent signals? What if they only were bossing around?
This is something the Starcat team has started, but it can be done in a deeper way. The advantage would be simple: you could encapsulate all codelets, for any domain in the universe, and cleanly separate them from the accidental features of your problem.
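Here is one way to sketch signal-only codelets; names are hypothetical and this is not the Starcat implementation:

```python
# Instead of mutating representations directly, a codelet just emits a
# domain-free signal; domain-specific handlers subscribed to the
# workspace do the actual work.
class Workspace:
    def __init__(self):
        self.handlers = {}            # signal name -> handler

    def on(self, signal, handler):
        self.handlers[signal] = handler

    def emit(self, signal, **payload):
        return self.handlers[signal](**payload)

# Domain-free codelet: it only bosses around; it knows nothing about
# chess pieces or Copycat strings.
def scout_codelet(ws):
    return ws.emit("propose-bond", a="a", b="b")

# Domain-specific handler encapsulating the accidental features.
ws = Workspace()
ws.on("propose-bond", lambda a, b: f"bond({a},{b})")
print(scout_codelet(ws))  # → bond(a,b)
```

The same scout codelet could be reused in any domain by swapping the handler, which is exactly the essential/accidental separation discussed above.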
But here comes Christian's Law, once again: "Language compiles everything".
Back to the compiler.
Damn!
Posted by
Alexandre Linhares
at
12:50 AM
0 comments
Links to this post
Labels: codelets, cognitive mechanisms, cognitive science, computer science, fluid concepts, software design, technology
Despite all the talk about Apple and Google, Adobe is the coolest company on earth. With Flash and, soon, AIR, they are the backbone of the net. If you want to be stunned and see what they are up to, take a look at these two product lines:
Video. They are moving from low-quality (read: YouTube) videos to high-quality, high-def. In Flash! That is an amazing feat.
Check it out for yourself. Double-click for fullscreen video.
http://www.flashvideofactory.com/test/DEMO720_Heima_H264_500K.html
(Hat tip to an amazing flash video blog).
This is going to have an impact in TV broadcasting in the following years.
If that is not enough, check out Sprout, a new web 2.0 company, also using Flash and, soon, AIR. It's enormously powerful stuff, and enormously easy to use.
You really have to hand it to those guys. This is beautiful work. They are really changing the curve of the curve.
From Analogy-Making as Perception, by Melanie Mitchell, MIT Press.
Here is Melanie's classification of codelets:
DESCRIPTION BUILDING CODELETS
Posted by
Alexandre Linhares
at
7:52 PM
4 comments
Links to this post
Labels: fluid concepts, software design, technology
This blog is the funniest thing ever. If you go there, leave her a comment.
I don't know about you monkeyboy Darwinists, but I'm sure as hell submitting papers to The Journal of Creation!
Misclassifications of Adenine and Guanine: Serious fraud of scientific evidence?
Dr. A. Linhares
Reverend, Universal Life Church
Abstract. In this article we present conclusive data showing that DNA base pairings of nucleotides--most especially some subtle effects involving pyrimidines--explain both (i) why the overall length of a DNA double helix determines the strength of the association between the two strands of DNA and (ii) how Eve was encouraged by a snake to let Adam eat of The Forbidden Fruit. Moreover, by showing that Adenine and Guanine have been intentionally misnamed with an explicit agenda against a clearer comprehension of the events surrounding His death and resurrection (a mischievous fraud for which its proponents shall repent), we demonstrate that such entities should have been characterized with the letters E and V. With our more modern terminology, we have been able to uncover 100 billion instances of the naturally occurring "EVE" sequence in the genome of the species in which no one is free from sin. Needless to mention, it is no coincidence that 100 billion is exactly the estimated number of galaxies in the observable universe, and even if supermassive black holes are found at the center of galaxies--a speculative, yet potentially possible finding--that would not in any statistically significant sense bear any effect on the data concerning the fact that He called Abraham and his progeny to be the means for saving all of humanity, or related phenomena.
So long, you fools! --Alex
That's the reason communism failed; nothing beats open markets and their incentive systems. Full article from Slate.
From a cognitive-science perspective, the semantic web is still years and years away--at least a full decade. What I mean is the set of complex mechanisms involved in creating meaning, not the usual ridiculous hyperbole out there.
Consider, for example, the fact that when the TAM flight crashed in São Paulo last July, the news pages were full of contextual ads urging readers to "Fly TAM".
Or take a look at these contextual ads (hat tip to Digg):
The best ideas on this issue are by Bob French. Though he doesn't specifically address 'contextual ads', he tackles the whole problem of extracting meaning from text databases, the trap into which semantic-web engineers keep falling. This paper is one of the funniest, and most intelligent, things I've ever read.
The "long tail" will always be algorithmic. The "fat head" will always be mainstream. The "middle ground" will be social. This naturally suggest a strategy for Yahoo! (which TechCrunch says is failing--and it just might be).
Yahoo! isn't mainstream media, nor algorithmic (like Google). From this point of view, I think what they should do becomes clear: They should strive to dominate the middle space.
Yahoo! should go beyond del.icio.us and acquire Digg. It should subordinate all of its strategy to having all content, including ads, brought up by social voting. If an ad is buried, let it go, just like every other piece of content. In the short run, most likely, only ads from Apple or Ron Paul will appear; in the long run, only good, socially targeted content should rise.
Meanwhile, algorithmic contextual ads will keep suggesting that we stone people to death, find debt, and burn babies.
Posted by
Alexandre Linhares
at
2:34 PM
0 comments
Links to this post
Labels: fluid concepts, semantic web, technology