991.
Dustin Hillard Eren Manavoglu Hema Raghavan Chris Leggetter Erick Cantú-Paz Rukmini Iyer 《Information Retrieval》2011,14(3):315-336
The critical task of predicting clicks on search advertisements is typically addressed by learning from historical click data.
When enough history is observed for a given query-ad pair, future clicks can be accurately modeled. However, based on the
empirical distribution of queries, sufficient historical information is unavailable for many query-ad pairs. The sparsity
of data for new and rare queries makes it difficult to accurately estimate clicks for a significant portion of typical search
engine traffic. In this paper we provide analysis to motivate modeling approaches that can reduce the sparsity of the large
space of user search queries. We then propose methods to improve click and relevance models for sponsored search by mining
click behavior for partial user queries. We aggregate click history for individual query words, as well as for phrases extracted
with a CRF model. The new models show significant improvement in clicks and revenue compared to state-of-the-art baselines
trained on several months of query logs. Results are reported on live traffic of a commercial search engine, in addition to
results from offline evaluation.
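The abstract's core idea of backing off from sparse query-ad history to click statistics aggregated over individual query words can be sketched as below. This is an illustrative reconstruction only: the `(query, clicks, impressions)` log schema is an assumption, and the paper additionally aggregates over phrases extracted with a CRF model, which is omitted here.

```python
from collections import defaultdict

def word_level_ctr(click_log):
    """Aggregate clicks and impressions per query word as a back-off
    click signal for rare or unseen full queries.

    click_log: iterable of (query, clicks, impressions) tuples.
    Hypothetical schema, not taken from the paper.
    """
    stats = defaultdict(lambda: [0, 0])  # word -> [clicks, impressions]
    for query, clicks, impressions in click_log:
        for word in query.split():
            stats[word][0] += clicks
            stats[word][1] += impressions
    # Click-through rate per word; usable when the full query is unseen.
    return {w: c / i for w, (c, i) in stats.items()}
```

A query never seen before, such as "cheap cruises", could then be scored from the per-word CTRs of "cheap" and "cruises" learned from other queries.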
992.
Empirical modeling of the score distributions associated with retrieved documents is an essential task for many retrieval
applications. In this work, we propose modeling the relevant documents’ scores by a mixture of Gaussians and the non-relevant
scores by a Gamma distribution. Applying Variational Bayes we automatically trade-off the goodness-of-fit with the complexity
of the model. We test our model on traditional retrieval functions and actual search engines submitted to TREC. We demonstrate
the utility of our model in inferring precision-recall curves. In all experiments our model outperforms the dominant exponential-Gaussian
model.
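The non-relevant side of the proposed model can be illustrated with a simple Gamma fit. Note the paper fits the full model (Gaussian mixture for relevant scores, Gamma for non-relevant) with Variational Bayes, which also selects the mixture complexity automatically; the moment-matching fit below is only a minimal stand-in for the Gamma component.

```python
def fit_gamma_moments(scores):
    """Fit Gamma(shape=k, scale=theta) to a list of retrieval scores
    by the method of moments (illustrative substitute for the paper's
    Variational Bayes estimation).
    """
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / n  # population variance
    k = mean ** 2 / var   # shape: matches the sample mean/variance ratio
    theta = var / mean    # scale: k * theta reproduces the sample mean
    return k, theta
```

With both component densities fitted, the posterior probability of relevance at a given score follows from Bayes' rule, which is what makes inferred precision-recall curves possible.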
993.
We first present in this paper an analytical view of heuristic retrieval constraints which yields simple tests to determine
whether a retrieval function satisfies the constraints or not. We then review empirical findings on word frequency distributions
and the central role played by burstiness in this context. This leads us to propose a formal definition of burstiness which
can be used to characterize probability distributions with respect to this phenomenon. We then introduce the family of information-based
IR models which naturally captures heuristic retrieval constraints when the underlying probability distribution is bursty
and propose a new IR model within this family, based on the log-logistic distribution. The experiments we conduct on several
collections illustrate the good behavior of the log-logistic IR model: It significantly outperforms the Jelinek-Mercer and
Dirichlet prior language models on most collections we have used, with both short and long queries and for both the MAP and
the precision at 10 documents. It also compares favorably to BM25 and has similar performance to classical DFR models such
as InL2 and PL2.
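A term-scoring function in the spirit of the log-logistic model can be sketched as follows. The exact formula is an assumption on our part (the abstract does not state it): we use a pivoted-length-normalized term frequency, a document-frequency-based parameter `lam`, and the information-based form `log((t' + lam) / lam)`, which rewards bursty (high within-document frequency, low document frequency) terms.

```python
import math

def log_logistic_score(tf, doc_len, avg_doc_len, df, num_docs, c=1.0):
    """Hedged sketch of a log-logistic, information-based term score.

    Assumed ingredients (not quoted from the paper):
      t'  = tf * log2(1 + c * avgdl / dl)   length-normalized frequency
      lam = df / N                          collection-level rarity
      score = log2((t' + lam) / lam)        surprise of observing t'
    """
    t_norm = tf * math.log2(1 + c * avg_doc_len / doc_len)
    lam = df / num_docs
    return math.log2((t_norm + lam) / lam)
```

The score grows with within-document frequency and shrinks for terms that appear in many documents, which is the qualitative behavior the heuristic retrieval constraints demand.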
994.
As new media technologies and platforms emerge and take hold in our society, traditional publishers are wondering: What’s in this new content climate for me? The simple answer is: a lot. The digital world, mobile content delivery mechanisms and the public’s increasing comfort—even preference for—a media menu from which they can pick and choose what they want and how they want to receive it, brings exciting and potentially lucrative opportunities. For publishers who understand how to leverage their brand and create authentic, identifiable value in the eyes of the customer, risk can be reduced and new revenue streams built. Here are four best practices to position your publishing company for growth.
995.
Christopher Platt 《Publishing Research Quarterly》2011,27(3):247-253
Updated from a presentation given at Biblionext.it in Rome in April 2011, this article will highlight The New York Public
Library’s success with e-books and other forms of popular e-content and our efforts to stay one step ahead of the consumer
shift from print reading to e-reading. Consumer e-reading is dominated by Amazon.com in the US, followed by one of the largest
chain bookstores, BarnesandNoble.com. The availability of digital versions of very popular titles, coupled with the explosion
of e-readers, tablets, and smartphones that are priced competitively and fairly easy to use, are helping move a lot of Americans
into the e-book world. Last month, Amazon.com announced they sold more e-books than physical books for the first time ever.
Print books are not going away, but our experience is that it is clear e-books are no longer just an extra format to offer,
they are integral to our future.
996.
Felicitas Kraemer Kees van Overveld Martin Peterson 《Ethics and Information Technology》2011,13(3):251-260
We argue that some algorithms are value-laden, and that two or more persons who accept different value-judgments may have
a rational reason to design such algorithms differently. We exemplify our claim by discussing a set of algorithms used in
medical image analysis: In these algorithms it is often necessary to set certain thresholds for whether e.g. a cell should
count as diseased or not, and the chosen threshold will partly depend on the software designer’s preference between avoiding
false positives and false negatives. This preference ultimately depends on a number of value-judgments. In the last section
of the paper we discuss some general principles for dealing with ethical issues in algorithm-design.
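The threshold-setting trade-off the authors describe is easy to make concrete. The toy data below (cell "intensity" scores and ground-truth labels) is entirely hypothetical, not from the paper; it only shows how moving the threshold converts false negatives into false positives, which is why the choice embeds a value judgment.

```python
def confusion_at_threshold(intensities, labels, threshold):
    """Count false positives and false negatives when a cell counts as
    'diseased' iff its intensity >= threshold.

    intensities: per-cell scores; labels: 1 = truly diseased, 0 = healthy.
    Hypothetical feature and data, used only to illustrate the trade-off.
    """
    fp = sum(1 for x, y in zip(intensities, labels) if x >= threshold and y == 0)
    fn = sum(1 for x, y in zip(intensities, labels) if x < threshold and y == 1)
    return fp, fn
```

A designer who fears missed diagnoses (false negatives) will lower the threshold; one who fears unnecessary interventions (false positives) will raise it. Neither choice is dictated by the data alone.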
997.
998.
Bibliometric mapping of computer and information ethics (total citations: 1; self-citations: 0; citations by others: 1)
Richard Heersmink Jeroen van den Hoven Nees Jan van Eck Jan van den Berg 《Ethics and Information Technology》2011,13(3):241-249
This paper presents the first bibliometric mapping analysis of the field of computer and information ethics (C&IE). It provides
a map of the relations between 400 key terms in the field. This term map can be used to get an overview of concepts and topics
in the field and to identify relations between information and communication technology concepts on the one hand and ethical
concepts on the other hand. To produce the term map, a data set of over a thousand articles published in leading journals and
conference proceedings in the C&IE field was constructed. With the help of various computer algorithms, key terms were identified
in the titles and abstracts of the articles and co-occurrence frequencies of these key terms were calculated. Based on the
co-occurrence frequencies, the term map was constructed. This was done using a computer program called VOSviewer. The term
map provides a visual representation of the C&IE field and, more specifically, of the organization of the field around three
main concepts, namely privacy, ethics, and the Internet.
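The co-occurrence counting step at the heart of the mapping pipeline can be sketched as below. This is a simplified stand-in: the study uses dedicated term-identification algorithms and the VOSviewer program for layout, whereas here key terms are given and matched by plain substring search.

```python
from collections import Counter
from itertools import combinations

def term_cooccurrence(documents, key_terms):
    """Count how often each pair of key terms co-occurs in the same
    title/abstract. Pairs are stored in sorted order so (a, b) and
    (b, a) are counted together.
    """
    counts = Counter()
    for doc in documents:
        text = doc.lower()
        present = sorted({t for t in key_terms if t in text})
        for a, b in combinations(present, 2):
            counts[(a, b)] += 1
    return counts
```

The resulting pair frequencies are exactly the input a tool like VOSviewer needs to place strongly co-occurring terms near each other on the map.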
999.
Gordon Hull Heather Richter Lipford Celine Latulipe 《Ethics and Information Technology》2011,13(4):289-302
Social networking sites like Facebook are rapidly gaining in popularity. At the same time, they seem to present significant
privacy issues for their users. We analyze two of Facebook’s more recent features, Applications and News Feed, from the perspective
enabled by Helen Nissenbaum’s treatment of privacy as “contextual integrity.” Offline, privacy is mediated by highly granular
social contexts. Online contexts, including social networking sites, lack much of this granularity. These contextual gaps
are at the root of many of the sites’ privacy issues. Applications, which nearly invisibly shares not just a user’s own but also a
user’s friends’ information with third parties, clearly violates standard norms of information flow. News Feed is a more complex
case, because it involves not just questions of privacy, but also of program interface and of the meaning of “friendship”
online. In both cases, many of the privacy issues on Facebook are primarily design issues, which could be ameliorated by an
interface that made the flows of information more transparent to users.
1000.
Thomas W. Simpson 《Ethics and Information Technology》2011,13(1):29-38
Trust online can be a hazardous affair; many are trustworthy, but some people use the anonymity of the web to behave very badly indeed. So how can we improve the quality of evidence for trustworthiness provided online? I focus on one of the devices we use to secure others’ trustworthiness: tracking past conduct through online reputation systems. Yet existing reputation systems face problems. I analyse these, and in the light of this develop some principles for system design, towards overcoming these challenges. In providing better evidence for trustworthiness online, so we can also encourage people actually to be trustworthy more often, which is an ethically welcome outcome.