Generally, writing a research paper involves the following steps:
1. Pick a topic (done)
2. Research (done)
3. Finding sources (in progress)
4. Plagiarism and research integrity (in progress)
5. Notes (in progress)
6. Persuasive writing (in progress)
7. First draft (in progress)
8. Citing, references, and illustrations (in progress)
9. Formatting (in progress)
10. Final draft (in progress)
This series is still under construction; part 1 has been published. Stay tuned.
Thursday, April 30, 2009
How to Write a Research Paper, Part 1: sections 01 and 02 are complete
Tuesday, April 14, 2009
(70) XML Learning Handbook
Why Use XML
XML stands for Extensible Markup Language and is defined by the XML Working Group of the World Wide Web Consortium (W3C). The working group describes the language this way: "The Extensible Markup Language (XML) is a subset of SGML. Its goal is to enable generic SGML to be served, received, and processed on the Web in the way that is now possible with HTML. XML has been designed for ease of implementation and for interoperability with both SGML and HTML." This passage is quoted from the official XML 1.0 specification, which the XML Working Group completed in February 1998. You can read the full document on the W3C website at http://www.w3c.org/TR/REC-xml. As you can see, XML is a language designed specifically for delivering information over the World Wide Web, much like HTML (Hypertext Markup Language), which has been the standard language for creating Web pages since the Web began. Since we already have HTML, which has grown into a language that seems able to meet almost any need, you may wonder: why do we need an entirely new language for the Web? What is new and different about XML? What particular advantages and strengths does it have? How does it relate to HTML? Is it meant to replace HTML or to enhance it? And finally, what is this SGML of which XML is a subset, and why don't we just use SGML to create Web pages? This book tries to answer all of these questions.
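As a small illustration of the point above, here is a sketch in Python's standard library. The document and its tag names are invented for this example; what it shows is the key difference from HTML: in XML you define your own tags to describe data, and a well-formed document (every open tag matched by a close) can be parsed mechanically.

```python
import xml.etree.ElementTree as ET

# A minimal XML document with user-defined tags (names are illustrative only).
# Unlike HTML, the tag set is not fixed; the only requirement is that the
# document be well-formed: every opening tag has a matching closing tag.
doc = """<note priority="high">
  <to>Reader</to>
  <from>Editor</from>
  <body>XML carries structured data; HTML describes presentation.</body>
</note>"""

root = ET.fromstring(doc)
print(root.tag)                 # note
print(root.attrib["priority"])  # high
print(root.find("to").text)     # Reader
```

The parser rejects documents that are not well-formed, which is what makes XML easy for software to process reliably.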
Detailed materials for this handbook are available through our shared mailbox; for how to use the shared mailbox, see the pinned post!
(69) Introduction to the classic textbook Computer Networks (4th edition) by Andrew S. Tanenbaum
Classic books deserve to be shared. More materials related to this book are available through our shared mailbox. For how to use the shared mailbox, see the pinned post on the home page!
Tuesday, April 7, 2009
(68) MATLAB Fundamentals (Chinese edition): materials collected, organized, and released
If you need them, contact us using the method in the pinned post!
Contents
1. Introduction ... 13
2. Basics and Getting Started ... 21
3. Numeric Arrays and Their Operations ... 30
4. String Arrays, Cell Arrays, and Structure Arrays ... 52
5. Numerical Computation ... 68
6. Symbolic Computation ... 118
7. Visualization of Data and Functions ... 139
8. M-Files and Object-Oriented Programming ... 176
9. SIMULINK Interactive Simulation Environment ... 198
10. Handle Graphics ... 225
11. Building Graphical User Interfaces (GUI) ... 243
12. MATLAB Compiler and API ... 268
13. Notebook ... 288
Appendix A: Index ... 295
(66) MATLAB Study Notes 1, PDF format
10 pages in total!
Download link:
http://www.fileupyours.com/view/237452/MatLab%20study%20notes_1.PDF
Or use the method below (the site above is not very stable):
How to get it: we use a shared mailbox. To keep the mailbox useful, it is for urgent requests only. Email us at xyangirl#gmail.com (to deter spam, replace # with @ by hand) and we will reply promptly. Also to deter spam, please use a Gmail account if possible, log in, and set this site as a "followed" site; we will send you the latest content as it appears!
Monday, April 6, 2009
Sunday, April 5, 2009
(63) OPNET materials have been shared; contact us if you need them!
How to get them: we use a shared mailbox. To keep the mailbox useful, it is for urgent requests only. Email us at xyangirl#gmail.com (to deter spam, replace # with @ by hand); once we receive your message, we will send the shared mailbox address and password to your email. Also to deter spam, please use a Gmail account if possible, log in, and set this site as a "followed" site; we will send you the latest content as it appears!
(60) A Little English Humor
1
Love's Philosophy
The fountains mingle with the river
And the rivers with the ocean,
The winds of heaven mix for ever
With a sweet emotion.
Nothing in the world is single,
All things by a law divine
In one another's being mingle--
Why not I with thine?
See the mountains kiss high heaven
And the waves clasp one another;
No sister--flower would be forgiven
If it disdain'd its brother.
And the sunlight clasps the earth,
And the moonbeams kiss the sea--
What are all these kissings worth,
If thou kiss not me?
2 Relaxation:
If I Am a Manager
One day in class, the teacher assigned his students to write a composition: If I Am a Manager. All the students began to write except a boy. The teacher went to him and asked the reason. " I am waiting for my secretary," was the boy's answer.
3 Relaxation:
The Most Beautiful Thing I Ever Saw
The students in the composition class were assigned the task of writing an essay on "The most beautiful thing I ever saw." The student, who, of all the members of the class, seemed the least sensitive to beauty, handed in his paper first with astonishing speed. It was short and to the point. He had written: "The most beautiful thing I ever saw was too beautiful for words."
4
My Last Will
------Unknown
My will is easy to decide,
For there is nothing to divide,
My kin don't need to fuss and moan----
"Moss does not cling to rolling stone."
My body? --Oh-- If I could choose,
I would to ashes it reduce,
And let the merry breeze blow
My dust to where some flowers grow.
Perhaps some fading flower then
Would come to life and bloom again.
This is my last and final will,
Good luck to all of you.
(59) <Just for Laughs> Contemporary Historical Records: The Post-80s Generation
(58) OPNET 14.5 Documentation, Chinese Edition: Release Page (Progress Tracker)
2009.4.5: pages 1-14:
下载地址: http://www.fileupyours.com/view/237452/opnet14.5%20document%20zh.cn%281-15%29.pdf
Download notes:
There are two ways to download. Since we have no FTP or web server space, files can only be hosted on free services.
Option 1: the file is uploaded to http://www.fileupyours.com/ (an overseas site; not very stable, and download speeds are mediocre). The download link is listed above.
Option 2: use our shared mailbox. To keep the mailbox useful, it is for urgent requests only. If Option 1 does not work, email us at xyangirl#gmail.com (to deter spam, replace # with @ by hand), and we will send the mailbox address and password to your email. Also to deter spam, please use a Gmail account if possible, log in, and set this site as a "followed" site; we will send you the latest content as it appears!
Other study materials, papers, and references on OPNET and network simulation can also be downloaded from the mailbox above; if you need them, write to us for the mailbox address!
Wednesday, April 1, 2009
(57) Can We Increase Our Intelligence?
We’re often asked whether the human brain is still evolving. Taken at face value, it sounds like a silly question. People are animals, so selection pressure would presumably continue to apply across generations.
But the questioners are really concerned about a larger issue: how our brains are changing over time — and whether we have any control over these developments. This week we discuss intelligence and the “Flynn effect,” a phenomenon that is too rapid to be explained by natural selection.
It used to be believed that people had a level of general intelligence with which they were born that was unaffected by environment and stayed the same, more or less, throughout life. But now it’s known that environmental influences are large enough to have considerable effects on intelligence, perhaps even during your own lifetime.
A key contribution to this subject comes from James Flynn, a moral philosopher who has turned to social science and statistical analysis to explore his ideas about humane ideals. Flynn’s work usually pops up in the news in the context of race issues, especially public debates about the causes of racial differences in performance on intelligence tests. We won’t spend time on the topic of race, but the psychologist Dick Nisbett has written an excellent article on the subject.
Flynn first noted that standardized intelligence quotient (I.Q.) scores were rising by three points per decade in many countries, and even faster in some countries like the Netherlands and Israel. For instance, in verbal and performance I.Q., an average Dutch 14-year-old in 1982 scored 20 points higher than the average person of the same age in his parents’ generation in 1952. These I.Q. increases over a single generation suggest that the environmental conditions for developing brains have become more favorable in some way.
What might be changing? One strong candidate is working memory, defined as the ability to hold information in mind while manipulating it to achieve a cognitive goal. Examples include remembering a clause while figuring out how it relates to the rest of a sentence, or keeping track of the solutions you’ve already tried while solving a puzzle. Flynn has pointed out that modern times have increasingly rewarded complex and abstract reasoning. Differences in working memory capacity account for 50 to 70 percent of individual differences in fluid intelligence (abstract reasoning ability) in various meta-analyses, suggesting that it is one of the major building blocks of I.Q. (Ackerman et al.; Kane et al.; Süss et al.) This idea is intriguing because working memory can be improved by training.
A common way to measure working memory is called the “n-back” task. Presented with a sequential series of items, the person taking the test has to report when the current item is identical to the item that was presented a certain number (n) of items ago in the series. For example, the test taker might see a sequence of letters like
L K L R K H H N T T N X
presented one at a time. If the test is an easy 1-back task, she should press a button when she sees the second H and the second T. For a 3-back task, the right answers are K and N, since they are identical to items three places before them in the list. Most people find the 3-back condition to be challenging.
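The scoring rule described above can be sketched in a few lines of Python. This is a toy illustration of the rule, not the actual test software; the function name is our own.

```python
def n_back_matches(seq, n):
    """Return (position, item) pairs where an item repeats the item n places earlier."""
    return [(i, item) for i, item in enumerate(seq) if i >= n and seq[i - n] == item]

# The example sequence from the text, presented one letter at a time.
seq = list("LKLRKHHNTTNX")

# 1-back: press when the current item equals the previous one.
print(n_back_matches(seq, 1))   # [(6, 'H'), (9, 'T')]: the second H and second T

# 3-back: press when the current item equals the item three places earlier.
print(n_back_matches(seq, 3))   # [(4, 'K'), (10, 'N')]: K and N
```

Running it on the sequence above recovers exactly the answers given in the text, which is why the 3-back condition is hard: you must hold the last three items in mind at every step.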
A recent paper reported that training on a particularly fiendish version of the n-back task improves I.Q. scores. Instead of seeing a single series of items like the one above, test-takers saw two different sequences, one of single letters and one of spatial locations. They had to report n-back repetitions of both letters and locations, a task that required them to simultaneously keep track of both sequences. As the trainees got better, n was increased to make the task harder. If their performance dropped, the task was made easier until they recovered.
Each day, test-takers trained for 25 minutes. On the first day, the average participant could handle the 3-back condition. By the 19th day, average performance reached the 5-back level, and participants showed a four-point gain in their I.Q. scores.
The I.Q. improvement was larger in people who’d had more days of practice, suggesting that the effect was a direct result of training. People benefited across the board, regardless of their starting levels of working memory or I.Q. scores (though the results hint that those with lower I.Q.s may have shown larger gains). Simply practicing an I.Q. test can lead to some improvement on the test, but control subjects who took the same two I.Q. tests without training improved only slightly. Also, increasing I.Q. scores by practice doesn’t necessarily increase other measures of reasoning ability (Ackerman, 1987).
Since the gains accumulated over a period of weeks, training is likely to have drawn upon brain mechanisms for learning that can potentially outlast the training. But this is not certain. If continual practice is necessary to maintain I.Q. gains, then this finding looks like a laboratory curiosity. But if the gains last for months (or longer), working memory training may become as popular as — and more effective than — games like sudoku among people who worry about maintaining their cognitive abilities.
Now, some caveats. The results, though tantalizing, are not perfect. It would have been better to give the control group some other training not related to working memory, to show that the hard work of training did not simply motivate the experimental group to try harder on the second I.Q. test. The researchers did not test whether working memory training improved problem-solving tasks of the type that might occur in real life. Finally, they did not explore how much improvement would be seen with further training.
Research on working memory training, as well as Flynn’s original observations, raise the possibility that the fast-paced modern world, despite its annoyances (or even because of them) may be improving our reasoning ability. Maybe even multitasking — not the most efficient way to work — is good for your brain because of the mental challenge. Something to think about when you’re contemplating retirement on a deserted island.
(56) Computers vs. Brains
Brains have long been compared to the most advanced existing technology — including, at one point, telephone switchboards. Today people talk about brains as if they were a sort of biological computer, with pink mushy “hardware” and “software” generated by life experiences.
However, any comparison with computers misses a messy truth. Because the brain arose through natural selection, it contains layers of systems that arose for one function and then were adopted for another, even though they don’t work perfectly. An engineer with time to get it right would have started over, but it’s easier for evolution to adapt an old system to a new purpose than to come up with an entirely new structure. Our colleague David Linden has compared the evolutionary history of the brain to the task of building a modern car by adding parts to a 1925 Model T that never stops running. As a result, brains differ from computers in many ways, from their highly efficient use of energy to their tremendous adaptability.
One striking feature of brain tissue is its compactness. In the brain’s wiring, space is at a premium, and is more tightly packed than even the most condensed computer architecture. One cubic centimeter of human brain tissue, which would fill a thimble, contains 50 million neurons; several hundred miles of axons, the wires over which neurons send signals; and close to a trillion (that’s a million million) synapses, the connections between neurons.
The memory capacity in this small volume is potentially immense. Electrical impulses that arrive at a synapse give the recipient neuron a small chemical kick that can vary in size. Variation in synaptic strength is thought to be a means of memory formation. Sam’s lab has shown that synaptic strength flips between extreme high and low states, a flip that is reminiscent of a computer storing a “one” or a “zero” — a single bit of information.
But unlike a computer, connections between neurons can form and break too, a process that continues throughout life and can store even more information because of the potential for creating new paths for activity. Although we’re forced to guess because the neural basis of memory isn’t understood at this level, let’s say that one movable synapse could store one byte (8 bits) of memory. That thimble would then contain 1,000 gigabytes (1 terabyte) of information. A thousand thimblefuls make up a whole brain, giving us a million gigabytes — a petabyte — of information. To put this in perspective, the entire archived contents of the Internet fill just three petabytes.
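The back-of-the-envelope arithmetic above can be written out directly. Note that the one-byte-per-synapse figure is the text's stated guess, not an established fact, and the volume figure is a round number.

```python
# Estimate from the text: ~a trillion synapses per cubic centimeter ("thimbleful"),
# one byte stored per movable synapse (the text's assumption), and roughly a
# thousand thimblefuls of tissue in a whole brain.
synapses_per_cm3 = 1e12
bytes_per_synapse = 1
brain_volume_cm3 = 1000

thimble_bytes = synapses_per_cm3 * bytes_per_synapse   # 1e12 bytes = 1 terabyte
brain_bytes = thimble_bytes * brain_volume_cm3         # 1e15 bytes = 1 petabyte

print(f"one thimbleful: {thimble_bytes / 1e12:.0f} TB")
print(f"whole brain:    {brain_bytes / 1e15:.0f} PB")
```

Each factor of 1,000 moves the estimate up one unit prefix, which is how a terabyte per thimbleful becomes a petabyte per brain.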
Ray Kurzweil, the futurist who predicts that engineers will eventually build an artificial brain, invokes Moore’s Law, the principle that for the last four decades, engineers have managed to double the capacity of chips (and hard drives) every year or two. If we imagine that the trend will continue, it’s possible to guess when a single computer the size of a brain could contain a petabyte. That would be about 2025 to 2030, just 15 or 20 years from now.
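The projection itself is simple compounding. The sketch below assumes a starting capacity of about one terabyte per device around 2010 (an illustrative round number, not a figure from the article): reaching a petabyte requires roughly ten doublings, so the arrival date depends only on the doubling period.

```python
TARGET_BYTES = 1e15  # one petabyte, the rough whole-brain estimate

def year_reaching(start_year=2010, start_bytes=1e12, years_per_doubling=2.0):
    """Compound Moore's-law doublings until capacity reaches the target."""
    year, capacity = start_year, start_bytes
    while capacity < TARGET_BYTES:
        capacity *= 2
        year += years_per_doubling
    return year

print(year_reaching(years_per_doubling=1.5))  # 2025.0
print(year_reaching(years_per_doubling=2.0))  # 2030.0
```

With these assumptions, a doubling every 1.5 years gives 2025 and a doubling every 2 years gives 2030, matching the range quoted above.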
This projection overlooks the dark, hot underbelly of Moore’s law: power consumption per chip, which has also exploded since 1985. By 2025, the memory of an artificial brain would use nearly a gigawatt of power, the amount currently consumed by all of Washington, D.C. So brute-force escalation of current computer technology would give us an artificial brain that is far too costly to operate.
Compare this with your brain, which uses about 12 watts, an amount that supports not only memory but all your thought processes. This is less than the energy consumed by a typical refrigerator light, and half the typical needs of a laptop computer. Cutting power consumption by half while increasing computing power many times over is a pretty challenging design standard. As smart as we are, in this sense we are all dim bulbs.
A persistent problem in artificial computing is the sensitivity of the system to component failure. Yet biological synapses are remarkably flaky devices even in normal, healthy conditions. They release neurotransmitter only a small fraction of the time when their parent neuron fires an electrical impulse. This unreliability may arise because individual synapses are so small that they contain barely enough machinery to function. This may be a trade-off that stuffs the most function into the smallest possible space.
In any case, a brain’s success is not measured by its ability to process information in precisely repeatable ways. Instead, it has evolved to guide behaviors that allow us to survive and reproduce, which often requires fast responses to complex situations. As a result, we constantly make approximations and find “good-enough” solutions. This leads to mistakes and biases. We think that when two events occur at the same time, one must have caused the other. We make inaccurate snap judgments such as racial prejudice. We fail to plan rationally for the future, as explored in the field of neuroeconomics.
Still, engineers could learn a thing or two from brain strategies. For example, even the most advanced computers have difficulty telling a dog from a cat, something that can be done at a glance by a toddler — or a cat. We use emotions, the brain’s steersman, to assign value to our experiences and to future possibilities, often allowing us to evaluate potential outcomes efficiently and rapidly when information is uncertain. In general, we bring an extraordinary amount of background information to bear on seemingly simple tasks, allowing us to make inferences that are difficult for machines.
If engineers can understand how to apply these shortcuts and tricks, computer performance could begin to emulate some of the more impressive feats of human brains. However, this route may lead to computers that share our imperfections. This may not be exactly what we want from robot overlords, but it could lead to better “soft” judgments from our computers.
This gets us to the deepest point: why bother building an artificial brain?
As neuroscientists, we’re excited about the potential of using computational models to test our understanding of how the brain works. On the other hand, although it eventually may be possible to design sophisticated computing devices that imitate what we do, the capability to make such a device is already here. All you need is a fertile man and woman with the resources to nurture their child to adulthood. With luck, by 2030 you’ll have a full-grown, college-educated, walking petabyte. A drawback is that it may be difficult to get this computing device to do what you ask.
We’re grateful to Olivia for the opportunity to write these four columns. Our topics