Full-text access type
Paid full text | 227 articles |
Free | 5 articles |
Domestic free | 16 articles |
Subject category
Education | 79 articles |
Scientific research | 87 articles |
Sports | 15 articles |
General | 11 articles |
Information & communication | 56 articles |
Publication year
2023 | 6 articles |
2022 | 2 articles |
2021 | 5 articles |
2020 | 7 articles |
2019 | 14 articles |
2018 | 3 articles |
2017 | 4 articles |
2016 | 6 articles |
2015 | 9 articles |
2014 | 11 articles |
2013 | 14 articles |
2012 | 16 articles |
2011 | 23 articles |
2010 | 9 articles |
2009 | 15 articles |
2008 | 25 articles |
2007 | 19 articles |
2006 | 22 articles |
2005 | 15 articles |
2004 | 5 articles |
2003 | 5 articles |
2002 | 4 articles |
2001 | 4 articles |
2000 | 4 articles |
1999 | 1 article |
Sort order: 248 results found (search time: 46 ms)
1.
Automatic summarization of Chinese dialogue-style text is a relatively new research area. In this paper, Latent Semantic Analysis (LSA) is first used to extract semantic knowledge from a given document and all question paragraphs are identified; an automatic text segmentation approach analogous to TextTiling is then exploited to improve the precision of correlating question paragraphs with answer paragraphs; finally, some "important" sentences are extracted from the generic content and the question-answer pairs to generate a complete summary. Experimental results showed that our approach is highly efficient and significantly improves the coherence of the summary without compromising informativeness.
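As a rough sketch of the LSA step described in this abstract (the toy term-by-sentence matrix and the classic topic-based selection heuristic are illustrative assumptions, not the paper's exact procedure), one can decompose a term-by-sentence matrix with SVD and pick, for each leading latent topic, the sentence that weighs most heavily in it:

```python
import numpy as np

# Toy term-by-sentence matrix: rows = terms, columns = sentences.
# In practice this would be built from the document's vocabulary.
A = np.array([
    [1.0, 0.0, 1.0],
    [0.0, 2.0, 0.0],
    [1.0, 0.0, 2.0],
    [0.0, 1.0, 0.0],
])

# Latent Semantic Analysis: singular value decomposition of A.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

def select_sentences(Vt, k=2):
    """For each of the top-k latent topics, pick the sentence with the
    largest (absolute) weight in the corresponding right-singular vector."""
    return [int(np.argmax(np.abs(Vt[i]))) for i in range(k)]

summary_indices = select_sentences(Vt, k=2)
```

The selected column indices identify the sentences an extractive summary would keep; a real system would also filter out redundant picks.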
2.
KATSAGGELOS Aggelos K. 《浙江大学学报(A卷英文版)》2006,7(5):801-810
INTRODUCTION
Video streaming is becoming one of the major driving forces of next-generation wireless networks. For the currently deployed cellular networks, the practical data rates are not sufficient to support full-rate, high-quality video applications. As a result, many research efforts have been devoted to adapting video content to reconcile the conflict between the high demand for video quality and the limited wireless communication resources among users. A large body of literature utiliz…
3.
We present a new paradigm for the automatic creation of document headlines that is based on direct transformation of relevant textual information into well-formed textual output. Starting from an input document, we automatically create compact representations of weighted finite sets of strings, called WIDL-expressions, which encode the most important topics in the document. A generic natural language generation engine performs the headline generation task, driven both by statistical knowledge encapsulated in WIDL-expressions (representing topic biases induced by the input document) and by statistical knowledge encapsulated in language models (representing biases induced by the target language). Our evaluation shows quality comparable to that of a state-of-the-art extractive approach to headline generation, and significant improvements in quality over previously proposed solutions to abstractive headline generation.
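As a loose illustration of the idea only (a real WIDL-expression encodes a weighted set of strings compactly, automaton-style, rather than by enumeration; the candidate weights and the bigram model below are invented for the example), a headline can be chosen by combining each candidate's topic weight with its language-model score:

```python
import math

# Illustrative stand-in for a WIDL-expression: an explicitly enumerated
# weighted set of candidate headline strings.
candidates = {
    "markets rally on rate cut": 0.7,
    "rate cut markets rally on": 0.3,
}

# Tiny hypothetical bigram language model (log-probabilities).
bigram_logprob = {
    ("markets", "rally"): -0.5, ("rally", "on"): -0.7,
    ("on", "rate"): -1.0, ("rate", "cut"): -0.4,
    ("cut", "markets"): -3.0,
}

def lm_score(sentence, default=-5.0):
    """Sum bigram log-probabilities; unseen bigrams get a floor score."""
    words = sentence.split()
    return sum(bigram_logprob.get(p, default) for p in zip(words, words[1:]))

def best_headline(cands):
    # Combine the topic bias (candidate weight) with the target-language
    # bias (language-model score), both in log space.
    return max(cands, key=lambda s: math.log(cands[s]) + lm_score(s))

headline = best_headline(candidates)
```

The language model penalizes the ill-formed word order even though both candidates carry the same content, which is the division of labor the abstract describes.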
4.
This paper systematically studies query-focused opinion summarization, aiming to build a query-oriented opinion summarization framework and to examine how different summarization methods affect summary quality. Candidate sentences are extracted from the target documents by jointly considering sentiment polarity and sentence similarity, and summaries are then generated using neural networks and word-embedding techniques, yielding the query-focused opinion summarization framework. An experimental dataset of debate topics and arguments was crawled from the Debatepedia website, and the proposed methods were applied to it to compare the models. Experimental results show that on this dataset the purely extractive method produces the highest-quality opinion summaries, achieving the best average ROUGE, deep semantic similarity, and sentiment scores: 6.58%, 1.79%, and 11.52% higher, respectively, than the abstractive method, and 8.33%, 2.80%, and 13.86% higher than the combined method. The proposed deep semantic similarity and sentiment score metrics also help to better evaluate query-focused opinion summarization models. These findings are significant for improving query-focused opinion summarization and for promoting the application of opinion summarization models in information science.
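A minimal sketch of the extractive scoring described in this abstract, combining sentence-query similarity with sentiment strength (the sentiment lexicon, the linear weighting, and the bag-of-words cosine similarity are simplifying assumptions, not the paper's exact method):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(a[t] * b[t] for t in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

# Hypothetical sentiment lexicon; real systems use a learned model.
LEXICON = {"good": 1.0, "great": 1.0, "bad": -1.0, "harmful": -1.0}

def sentiment(words):
    """Absolute sentiment strength of a sentence (opinionatedness)."""
    return abs(sum(LEXICON.get(w, 0.0) for w in words))

def extract(query, sentences, k=1, alpha=0.7):
    """Rank sentences by alpha * similarity + (1 - alpha) * sentiment."""
    q = Counter(query.split())
    scored = []
    for s in sentences:
        words = s.split()
        score = (alpha * cosine(q, Counter(words))
                 + (1 - alpha) * sentiment(words))
        scored.append((score, s))
    return [s for _, s in sorted(scored, reverse=True)[:k]]

picked = extract(
    "is homework good",
    ["homework is good for discipline", "the weather is mild today"],
)
```

The query-relevant, opinion-bearing sentence outranks the neutral off-topic one, which is the behavior the extraction step needs before summary generation.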
5.
The history of research on Liu Xizai's Yigai (《艺概》) can, along the temporal dimension, be divided into roughly five periods; viewed through the differing perspectives scholars have taken, the emphases, dominant values, and achievements of each period become apparent. The main achievements of Yigai scholarship came in the two decades following the 1980s. Synthesizing the characteristics of the different periods of Yigai research not only provides historical reference for deepening that research, but will also guide subsequent study of Liu Xizai's Yigai.
6.
Yoojin Kwon Michelle Lemieux Jill McTavish Nadine Wathen 《Journal of the Medical Library Association》2015,103(4):184-188
Objective
The purpose of this study was to compare the effectiveness of different options for de-duplicating records retrieved from systematic review searches.

Methods
Using the records from a published systematic review, five de-duplication options were compared. The time taken to de-duplicate with each option and the number of false positives (records deleted that should not have been) and false negatives (records that should have been deleted but were not) were recorded.

Results
The time for each option varied. The numbers of false positives and false negatives returned by each option also varied greatly.

Conclusion
The authors recommend different de-duplication options based on the skill level of the searcher and the purpose of the de-duplication effort.

7.
Kevin B. Read Alisa Surkis Catherine Larson Aileen McCrillis Alice Graff Joey Nicholson Juanchan Xu 《Journal of the Medical Library Association》2015,103(3):131-135
Objective
The research obtained information to plan data-related products and services.

Methods
Biomedical researchers in an academic medical center were selected using purposive sampling and interviewed using open-ended questions based on a literature review. Interviews were conducted until saturation was achieved.

Results
Interview responses informed library planners about researchers’ key data issues.

Conclusions
This approach proved valuable for planning data management products and services and raising library visibility among clients in the research data realm.

8.
This article reviews research on adolescent Internet addiction, covering its definition, measurement and assessment criteria, associated risk factors, and treatment and intervention approaches, with the aim of providing a reference for further research on the Internet addiction phenomenon.
9.
郑波辉 《信阳师范学院学报(哲学社会科学版)》2015,(1):31-35
As the founder of scientific socialism, Marx provides the primary reference for studying equality under socialism. The equality Marx pursued is substantive equality in the economic and social spheres; his conception of equality was inspired by modern Western bourgeois ideas of equality while critically transcending them. Compared with the liberal conception of equality, the socialist conception guided by Marx differs in its methodology for understanding equality, in how it ranks the values of equality and liberty, and in its attitude toward the basic fact that people are born different. The contemporary practice of the socialist market economy is the realistic path for supplying the conditions needed to realize Marx's ideal of equality.
10.
《Information processing & management》2022,59(3):102913
Abstractive summarization aims to generate a concise summary covering salient content from single or multiple text documents. Many recent abstractive summarization methods are built on the transformer model to capture long-range dependencies in the input text and achieve parallelization. In the transformer encoder, calculating attention weights is a crucial step for encoding input documents. Input documents usually contain key phrases conveying salient information, and it is important to encode these phrases completely. However, existing transformer-based summarization works have not considered key phrases in the input when determining attention weights. Consequently, some of the tokens within key phrases receive only small attention weights, which is not conducive to encoding the semantic information of input documents. In this paper, we introduce prior knowledge of key phrases into the transformer-based summarization model and guide the model to encode key phrases. For the contextual representation of each token in a key phrase, we assume the tokens within the same key phrase make larger contributions than other tokens in the input sequence. Based on this assumption, we propose the Key Phrase Aware Transformer (KPAT), a model with a highlighting mechanism in the encoder that assigns greater attention weights to tokens within key phrases. Specifically, we first extract key phrases from the input document and score the phrases' importance. Then we build a block diagonal highlighting matrix to indicate these phrases' importance scores and positions. To combine self-attention weights with key phrases' importance scores, we design two structures: highlighting attention for each head and multi-head highlighting attention. Experimental results on two datasets (Multi-News and PubMed) from different summarization tasks and domains show that our KPAT model significantly outperforms advanced summarization baselines. We conduct further experiments to analyze the impact of each part of our model on summarization performance and to verify the effectiveness of the proposed highlighting mechanism.
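A minimal sketch of the highlighting idea from this abstract (the additive combination with the attention logits, and all shapes and values, are our assumptions; the paper's exact formulation may differ): a block diagonal matrix carrying phrase importance scores is added to the self-attention logits so that tokens inside a key phrase attend more strongly to each other:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical 5-token input where tokens 1-2 form one key phrase with
# importance score 0.9 (half-open span [1, 3)).
n = 5
phrase_spans = [((1, 3), 0.9)]

# Block diagonal highlighting matrix: one block per key phrase.
H = np.zeros((n, n))
for (start, end), score in phrase_spans:
    H[start:end, start:end] = score

# Stand-in for the scaled dot-product logits Q K^T / sqrt(d).
logits = np.random.default_rng(0).normal(size=(n, n))

plain = softmax(logits)            # ordinary self-attention weights
highlighted = softmax(logits + H)  # highlighting-boosted weights
```

Within the phrase block, attention mass shifts toward the phrase's own tokens while each row still sums to one, which is the intended effect of the highlighting mechanism.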